=====================
YAML I/O
=====================

.. contents::
   :local:

Introduction to YAML
====================

YAML is a human readable data serialization language. The full YAML language
spec can be read at `yaml.org
<http://www.yaml.org/spec/1.2/spec.html#Introduction>`_. The simplest form of
YAML is just "scalars", "mappings", and "sequences". A scalar is any number
or string. The pound/hash symbol (#) begins a comment line. A mapping is
a set of key-value pairs in which each key ends with a colon. For example:

.. code-block:: yaml

  # a mapping
  name: Tom
  hat-size: 7

A sequence is a list of items where each item starts with a leading dash ('-').
For example:

.. code-block:: yaml

  # a sequence
  - x86
  - x86_64
  - PowerPC

You can combine mappings and sequences by indenting. For example, a sequence
of mappings in which one of the mapping values is itself a sequence:

.. code-block:: yaml

  # a sequence of mappings with one key's value being a sequence
  - name: Tom
    cpus:
      - x86
      - x86_64
  - name: Bob
    cpus:
      - x86
  - name: Dan
    cpus:
      - PowerPC
      - x86

Sometimes sequences are known to be short and one entry per line is too
verbose, so YAML offers an alternate syntax for sequences called a "Flow
Sequence" in which you put comma separated sequence elements into square
brackets. The above example could then be simplified to:

.. code-block:: yaml

  # a sequence of mappings with one key's value being a flow sequence
  - name: Tom
    cpus: [ x86, x86_64 ]
  - name: Bob
    cpus: [ x86 ]
  - name: Dan
    cpus: [ PowerPC, x86 ]


Introduction to YAML I/O
========================

The use of indenting makes the YAML easy for a human to read and understand,
but having a program read and write YAML involves a lot of tedious details.
The YAML I/O library structures and simplifies reading and writing YAML
documents.

YAML I/O assumes you have some "native" data structures which you want to be
able to dump as YAML and recreate from YAML. The first step is to try
writing example YAML for your data structures. You may find after looking at
possible YAML representations that a direct mapping of your data structures
to YAML is not very readable. Often the fields are not in an order that
a human would find readable, or the same information is replicated in multiple
locations, making it hard for a human to write such YAML correctly.

In relational database theory there is a design step called normalization in
which you reorganize fields and tables. The same considerations need to
go into the design of your YAML encoding, but you may not want to change
your existing native data structures. Therefore, when writing out YAML
there may be a normalization step, and when reading YAML there would be a
corresponding denormalization step.

YAML I/O uses a non-invasive, traits-based design. YAML I/O defines some
abstract base templates. You specialize those templates on your data types.
For instance, if you have an enumerated type FooBar you could specialize
ScalarEnumerationTraits on that type and define the enumeration() method:

.. code-block:: c++

  using llvm::yaml::ScalarEnumerationTraits;
  using llvm::yaml::IO;

  template <>
  struct ScalarEnumerationTraits<FooBar> {
    static void enumeration(IO &io, FooBar &value) {
      ...
    }
  };


As with all YAML I/O template specializations, ScalarEnumerationTraits is used
for both reading and writing YAML. That is, the mapping between in-memory enum
values and the YAML string representation is only in one place.
This assures that the code for writing and parsing of YAML stays in sync.

To specify a YAML mapping, you define a specialization of
llvm::yaml::MappingTraits.
If your native data structure happens to be a struct that is already normalized,
then the specialization is simple. For example:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  template <>
  struct MappingTraits<Person> {
    static void mapping(IO &io, Person &info) {
      io.mapRequired("name", info.name);
      io.mapOptional("hat-size", info.hatSize);
    }
  };


A YAML sequence is automatically inferred if your data type has begin()/end()
iterators and a push_back() method. Therefore any of the STL containers
(such as std::vector<>) will automatically translate to YAML sequences.

Once you have defined specializations for your data types, you can
programmatically use YAML I/O to write a YAML document:

.. code-block:: c++

  using llvm::yaml::Output;

  Person tom;
  tom.name = "Tom";
  tom.hatSize = 8;
  Person dan;
  dan.name = "Dan";
  dan.hatSize = 7;
  std::vector<Person> persons;
  persons.push_back(tom);
  persons.push_back(dan);

  Output yout(llvm::outs());
  yout << persons;

This would write the following:

.. code-block:: yaml

  - name: Tom
    hat-size: 8
  - name: Dan
    hat-size: 7

And you can also read such YAML documents with the following code:

.. code-block:: c++

  using llvm::yaml::Input;

  typedef std::vector<Person> PersonList;
  std::vector<PersonList> docs;

  Input yin(document.getBuffer());
  yin >> docs;

  if ( yin.error() )
    return;

  // Process read documents
  for ( PersonList &pl : docs ) {
    for ( Person &person : pl ) {
      cout << "name=" << person.name;
    }
  }

One other feature of YAML is the ability to define multiple documents in a
single file. That is why reading YAML produces a vector of your document type.


Error Handling
==============

When parsing a YAML document, if the input does not match your schema (as
expressed in your XxxTraits<> specializations), YAML I/O
will print out an error message and your Input object's error() method will
return true. For instance, the following document:

.. code-block:: yaml

  - name: Tom
    shoe-size: 12
  - name: Dan
    hat-size: 7

has a key (shoe-size) that is not defined in the schema. YAML I/O will
automatically generate this error:

.. code-block:: yaml

  YAML:2:2: error: unknown key 'shoe-size'
    shoe-size: 12
    ^~~~~~~~~

Similar errors are produced for other input not conforming to the schema.


Scalars
=======

YAML scalars are just strings (i.e. not a sequence or mapping). The YAML I/O
library provides support for translating between YAML scalars and specific
C++ types.


Built-in types
--------------
The following types have built-in support in YAML I/O:

* bool
* float
* double
* StringRef
* std::string
* int64_t
* int32_t
* int16_t
* int8_t
* uint64_t
* uint32_t
* uint16_t
* uint8_t

That is, you can use those types in fields of MappingTraits or as the element
type of a sequence. When reading, YAML I/O will validate that the string found
is convertible to that type and error out if not.
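
For example, a mapping can use these built-in types directly as field types.
A minimal sketch (the ``Widget`` struct and its key names here are hypothetical):

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  // Hypothetical struct whose fields are all built-in YAML I/O types.
  struct Widget {
    std::string name;
    uint32_t    count;
    bool        enabled;
  };

  template <>
  struct MappingTraits<Widget> {
    static void mapping(IO &io, Widget &w) {
      io.mapRequired("name", w.name);
      io.mapOptional("count", w.count);
      io.mapOptional("enabled", w.enabled);
    }
  };
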

Unique types
------------
Given that YAML I/O is trait based, the selection of how to convert your data
to YAML is based on the type of your data. But in C++ type matching, typedefs
do not generate unique type names. That means if you have two typedefs of
unsigned int, to YAML I/O both types look exactly like unsigned int. To
facilitate creating unique type names, YAML I/O provides a macro which is used
like a typedef on built-in types, but expands to create a class with conversion
operators to and from the base type. For example:

.. code-block:: c++

  LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFooFlags)
  LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyBarFlags)

This generates two classes MyFooFlags and MyBarFlags which you can use in your
native data structures instead of uint32_t. They are implicitly
converted to and from uint32_t. The point of creating these unique types
is that you can now specify traits on them to get different YAML conversions.

Hex types
---------
An example use of a unique type is that YAML I/O provides fixed sized unsigned
integers that are written with YAML I/O as hexadecimal instead of the decimal
format used by the built-in integer types:

* Hex64
* Hex32
* Hex16
* Hex8

You can use llvm::yaml::Hex32 instead of uint32_t and the only difference will
be that when YAML I/O writes out that type it will be formatted in hexadecimal.


ScalarEnumerationTraits
-----------------------
YAML I/O supports translating between in-memory enumerations and a set of string
values in YAML documents. This is done by specializing ScalarEnumerationTraits<>
on your enumeration type and defining an enumeration() method.
For instance, suppose you had an enumeration of CPUs and a struct with it as
a field:

.. code-block:: c++

  enum CPUs {
    cpu_x86_64 = 5,
    cpu_x86 = 7,
    cpu_PowerPC = 8
  };

  struct Info {
    CPUs cpu;
    uint32_t flags;
  };

To support reading and writing of this enumeration, you can define a
ScalarEnumerationTraits specialization on CPUs, which can then be used
as a field type:

.. code-block:: c++

  using llvm::yaml::ScalarEnumerationTraits;
  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  template <>
  struct ScalarEnumerationTraits<CPUs> {
    static void enumeration(IO &io, CPUs &value) {
      io.enumCase(value, "x86_64", cpu_x86_64);
      io.enumCase(value, "x86", cpu_x86);
      io.enumCase(value, "PowerPC", cpu_PowerPC);
    }
  };

  template <>
  struct MappingTraits<Info> {
    static void mapping(IO &io, Info &info) {
      io.mapRequired("cpu", info.cpu);
      io.mapOptional("flags", info.flags, 0);
    }
  };

When reading YAML, if the string found does not match any of the strings
specified by the enumCase() methods, an error is automatically generated.
When writing YAML, if the value being written does not match any of the values
specified by the enumCase() methods, a runtime assertion is triggered.
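
For example, with those traits an Info value whose cpu field is cpu_PowerPC and
whose flags field is 12 might round-trip through YAML as (an illustrative
sketch):

.. code-block:: yaml

  cpu: PowerPC
  flags: 12
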

BitValue
--------
Another common data structure in C++ is a field where each bit has a unique
meaning. This is often used in a "flags" field. YAML I/O has support for
converting such fields to a flow sequence. For instance, suppose you
had the following bit flags defined:

.. code-block:: c++

  enum {
    flagsPointy = 1,
    flagsHollow = 2,
    flagsFlat   = 4,
    flagsRound  = 8
  };

  LLVM_YAML_STRONG_TYPEDEF(uint32_t, MyFlags)

To support reading and writing of MyFlags, you specialize ScalarBitSetTraits<>
on MyFlags and provide the bit values and their names.

.. code-block:: c++

  using llvm::yaml::ScalarBitSetTraits;
  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  template <>
  struct ScalarBitSetTraits<MyFlags> {
    static void bitset(IO &io, MyFlags &value) {
      io.bitSetCase(value, "hollow", flagsHollow);
      io.bitSetCase(value, "flat", flagsFlat);
      io.bitSetCase(value, "round", flagsRound);
      io.bitSetCase(value, "pointy", flagsPointy);
    }
  };

  struct Info {
    StringRef name;
    MyFlags flags;
  };

  template <>
  struct MappingTraits<Info> {
    static void mapping(IO &io, Info& info) {
      io.mapRequired("name", info.name);
      io.mapRequired("flags", info.flags);
    }
  };

With the above, YAML I/O (when writing) will test each bit value in the bitset
trait against the flags field, and each one that matches will cause the
corresponding string to be added to the flow sequence. The opposite is done
when reading, and any unknown string value will result in an error. With
the above schema, a sample valid YAML document is:

.. code-block:: yaml

  name: Tom
  flags: [ pointy, flat ]

Sometimes a "flags" field might contain an enumeration part
defined by a bit-mask.

.. code-block:: c++

  enum {
    flagsFeatureA = 1,
    flagsFeatureB = 2,
    flagsFeatureC = 4,

    flagsCPUMask = 24,

    flagsCPU1 = 8,
    flagsCPU2 = 16
  };

To support reading and writing such fields, you need to use the
maskedBitSetCase() method and provide the bit values, their names and the
enumeration mask.

.. code-block:: c++

  template <>
  struct ScalarBitSetTraits<MyFlags> {
    static void bitset(IO &io, MyFlags &value) {
      io.bitSetCase(value, "featureA", flagsFeatureA);
      io.bitSetCase(value, "featureB", flagsFeatureB);
      io.bitSetCase(value, "featureC", flagsFeatureC);
      io.maskedBitSetCase(value, "CPU1", flagsCPU1, flagsCPUMask);
      io.maskedBitSetCase(value, "CPU2", flagsCPU2, flagsCPUMask);
    }
  };

YAML I/O (when writing) will apply the enumeration mask to the flags field,
and compare the result against the values from the bitset. As with a regular
bitset, each value that matches will cause the corresponding string to be added
to the flow sequence.
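
For example, with the masked bitset traits above, a flags value equal to
``flagsFeatureA | flagsCPU2`` might be written as (an illustrative sketch):

.. code-block:: yaml

  flags: [ featureA, CPU2 ]
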

Custom Scalar
-------------
Sometimes for readability a scalar needs to be formatted in a custom way. For
instance, your internal data structure may use an integer for time (seconds
since some epoch), but in YAML it would be much nicer to express that integer
in some time format (e.g. 4-May-2012 10:30pm). YAML I/O has a way to support
custom formatting and parsing of scalar types by specializing ScalarTraits<> on
your data type. When writing, YAML I/O will provide the native type and
your specialization must produce its string representation. When reading,
YAML I/O will provide an llvm::StringRef holding the scalar and your
specialization must convert that to your native data type. An outline of a
custom scalar type looks like:

.. code-block:: c++

  using llvm::yaml::ScalarTraits;
  using llvm::yaml::IO;

  template <>
  struct ScalarTraits<MyCustomType> {
    static void output(const MyCustomType &value, void*,
                       llvm::raw_ostream &out) {
      out << value;  // do custom formatting here
    }
    static StringRef input(StringRef scalar, void*, MyCustomType &value) {
      // do custom parsing here. Return the empty string on success,
      // or an error message on failure.
      return StringRef();
    }
    // Determine if this scalar needs quotes.
    static bool mustQuote(StringRef) { return true; }
  };

Block Scalars
-------------

YAML block scalars are string literals that are represented in YAML using the
literal block notation, just like the example shown below:

.. code-block:: yaml

  text: |
    First line
    Second line

The YAML I/O library provides support for translating between YAML block scalars
and specific C++ types by allowing you to specialize BlockScalarTraits<> on
your data type. The library doesn't provide any built-in support for block
scalar I/O for types like std::string and llvm::StringRef as they are already
supported by YAML I/O and use the ordinary scalar notation by default.

BlockScalarTraits specializations are very similar to ScalarTraits
specializations: when writing, YAML I/O provides the native type and your
specialization must write out its string value; when reading, it provides an
llvm::StringRef that holds the value of the block scalar and your
specialization must convert that to your native data type.
An example of a custom type with an appropriate specialization of
BlockScalarTraits is shown below:

.. code-block:: c++

  using llvm::yaml::BlockScalarTraits;
  using llvm::yaml::IO;

  struct MyStringType {
    std::string Str;
  };

  template <>
  struct BlockScalarTraits<MyStringType> {
    static void output(const MyStringType &Value, void *Ctxt,
                       llvm::raw_ostream &OS) {
      OS << Value.Str;
    }

    static StringRef input(StringRef Scalar, void *Ctxt,
                           MyStringType &Value) {
      Value.Str = Scalar.str();
      return StringRef();
    }
  };
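
If MyStringType were used as the value of a hypothetical ``text`` key in some
mapping, it might appear in YAML as a block scalar, and reading it back would
set ``Str`` to the multi-line contents (an illustrative sketch):

.. code-block:: yaml

  text: |
    This string spans
    multiple lines.
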

Mappings
========

To be translated to or from a YAML mapping for your type T you must specialize
llvm::yaml::MappingTraits on T and implement the "void mapping(IO &io, T&)"
method. If your native data structures use pointers to a class everywhere,
you can specialize on the class pointer. Examples:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  // Example of struct Foo which is used by value
  template <>
  struct MappingTraits<Foo> {
    static void mapping(IO &io, Foo &foo) {
      io.mapOptional("size", foo.size);
      ...
    }
  };

  // Example of struct Bar which is natively always a pointer
  template <>
  struct MappingTraits<Bar*> {
    static void mapping(IO &io, Bar *&bar) {
      io.mapOptional("size", bar->size);
      ...
    }
  };


No Normalization
----------------

The mapping() method is responsible, if needed, for normalizing and
denormalizing. In a simple case where the native data structure requires no
normalization, the mapping method just uses mapOptional() or mapRequired() to
bind the struct's fields to YAML key names. For example:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  template <>
  struct MappingTraits<Person> {
    static void mapping(IO &io, Person &info) {
      io.mapRequired("name", info.name);
      io.mapOptional("hat-size", info.hatSize);
    }
  };


Normalization
-------------

When [de]normalization is required, the mapping() method needs a way to access
normalized values as fields. To help with this, there is
a template MappingNormalization<> which you can use to automatically
do the normalization and denormalization. The template is used to create
a local variable in your mapping() method which contains the normalized keys.

Suppose you have a native data type
Polar which specifies a position in polar coordinates (distance, angle):

.. code-block:: c++

  struct Polar {
    float distance;
    float angle;
  };

but you've decided the normalized YAML representation should be in x,y
coordinates. That is, you want the YAML to look like:

.. code-block:: yaml

  x: 10.3
  y: -4.7

You can support this by defining a MappingTraits that normalizes the polar
coordinates to x,y coordinates when writing YAML and denormalizes x,y
coordinates into polar when reading YAML.

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  template <>
  struct MappingTraits<Polar> {

    class NormalizedPolar {
    public:
      NormalizedPolar(IO &io)
        : x(0.0), y(0.0) {
      }
      NormalizedPolar(IO &, Polar &polar)
        : x(polar.distance * cos(polar.angle)),
          y(polar.distance * sin(polar.angle)) {
      }
      Polar denormalize(IO &) {
        return Polar{sqrt(x*x + y*y), atan2(y, x)};
      }

      float x;
      float y;
    };

    static void mapping(IO &io, Polar &polar) {
      MappingNormalization<NormalizedPolar, Polar> keys(io, polar);

      io.mapRequired("x", keys->x);
      io.mapRequired("y", keys->y);
    }
  };

When writing YAML, the local variable "keys" will be a stack allocated
instance of NormalizedPolar, constructed from the supplied polar object, which
initializes its x and y fields. The mapRequired() methods then write out the x
and y values as key/value pairs.

When reading YAML, the local variable "keys" will be a stack allocated instance
of NormalizedPolar, constructed by the empty constructor. The mapRequired()
methods will find the matching keys in the YAML document and fill in the x and y
fields of the NormalizedPolar object keys. At the end of the mapping() method,
when the local keys variable goes out of scope, the denormalize() method will
automatically be called to convert the read values back to polar coordinates,
which are then assigned back to the second parameter to mapping().

In some cases, the normalized class may be a subclass of the native type and
could be returned by the denormalize() method, except that the temporary
normalized instance is stack allocated. In these cases, the utility template
MappingNormalizationHeap<> can be used instead. It is just like
MappingNormalization<> except that it heap allocates the normalized object
when reading YAML. It never destroys the normalized object, so the
denormalize() method can simply return "this".
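
For example, a minimal sketch of this pattern (the Shape and NormalizedShape
types are hypothetical), assuming the native type is a pointer to a base class
and the normalized class is a subclass of it:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::MappingNormalizationHeap;
  using llvm::yaml::IO;

  struct Shape {                            // hypothetical native type, used as Shape*
    virtual ~Shape() {}
  };

  struct NormalizedShape : public Shape {   // hypothetical normalized subclass
    NormalizedShape(IO &io) : size(0) {}
    NormalizedShape(IO &io, Shape *&shape) : size(0) {
      // normalize from *shape here when writing
    }
    Shape *denormalize(IO &io) {
      return this;   // safe because the object is heap allocated
    }

    uint32_t size;
  };

  template <>
  struct MappingTraits<Shape*> {
    static void mapping(IO &io, Shape *&shape) {
      MappingNormalizationHeap<NormalizedShape, Shape*> keys(io, shape);

      io.mapRequired("size", keys->size);
    }
  };

When reading, the NormalizedShape is heap allocated and never destroyed by the
template, so returning ``this`` from denormalize() leaves the native Shape*
pointing at the fully constructed object.
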

Default values
--------------
Within a mapping() method, calls to io.mapRequired() mean that the key is
required to exist when parsing YAML documents; otherwise, YAML I/O will issue
an error.

On the other hand, keys registered with io.mapOptional() are allowed to not
exist in the YAML document being read. So what value is put in the field
for those optional keys?
There are two steps to how those optional fields are filled in. First, the
second parameter to the mapping() method is a reference to a native class. That
native class must have a default constructor. Whatever value the default
constructor initially sets for an optional field will be that field's value.
Second, the mapOptional() method has an optional third parameter. If provided
it is the value that mapOptional() should set that field to if the YAML document
does not have that key.

There is one important difference between those two ways (default constructor
and third parameter to mapOptional). When YAML I/O generates a YAML document,
if the mapOptional() third parameter is used and the actual value being written
is the same as (using ==) the default value, then that key/value pair is not
written.


Order of Keys
--------------

When writing out a YAML document, the keys are written in the order that the
calls to mapRequired()/mapOptional() are made in the mapping() method. This
gives you a chance to write the fields in an order that a human reader of
the YAML document would find natural. This may be different from the order
of the fields in the native class.

When reading in a YAML document, the keys in the document can be in any order,
but they are processed in the order that the calls to mapRequired()/mapOptional()
are made in the mapping() method. That enables some interesting
functionality. For instance, if the first field bound is the cpu and the second
field bound is flags, and the flags are cpu specific, you can programmatically
switch how the flags are converted to and from YAML based on the cpu.
This works for both reading and writing. For example:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  struct Info {
    CPUs cpu;
    uint32_t flags;
  };

  template <>
  struct MappingTraits<Info> {
    static void mapping(IO &io, Info &info) {
      io.mapRequired("cpu", info.cpu);
      // flags must come after cpu for this to work when reading yaml
      if ( info.cpu == cpu_x86_64 )
        io.mapRequired("flags", *(My86_64Flags*)&info.flags);
      else
        io.mapRequired("flags", *(My86Flags*)&info.flags);
    }
  };


Tags
----

The YAML syntax supports tags as a way to specify the type of a node before
it is parsed. This allows dynamic types of nodes. But the YAML I/O model uses
static typing, so there are limits to how you can use tags with the YAML I/O
model. YAML I/O supports checking and setting the optional tag on a map.
Using this functionality it is even possible to support different mappings,
as long as they are convertible.

To check a tag, inside your mapping() method you can use io.mapTag() to specify
what the tag should be. This will also add that tag when writing YAML.
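
For example, a minimal sketch (the Square type, its field, and the tag name are
hypothetical):

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  struct Square {          // hypothetical native type
    uint32_t side;
  };

  template <>
  struct MappingTraits<Square> {
    static void mapping(IO &io, Square &sq) {
      // Check for the "!square" tag when reading; passing true also
      // causes the tag to be written when generating YAML.
      io.mapTag("!square", true);
      io.mapRequired("side", sq.side);
    }
  };
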

Validation
----------

Sometimes in a YAML map, each key/value pair is valid, but the combination is
not. This is similar to something having no syntax errors, but still having
semantic errors. To support semantic level checking, YAML I/O allows
an optional ``validate()`` method in a MappingTraits template specialization.

When parsing YAML, the ``validate()`` method is called *after* all key/values in
the map have been processed. Any error message returned by the ``validate()``
method during input will be printed just like a syntax error would be printed.
When writing YAML, the ``validate()`` method is called *before* the YAML
key/values are written. Any error during output will trigger an ``assert()``
because it is a programming error to have invalid struct values.


.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  struct Stuff {
    ...
  };

  template <>
  struct MappingTraits<Stuff> {
    static void mapping(IO &io, Stuff &stuff) {
      ...
    }
    static StringRef validate(IO &io, Stuff &stuff) {
      // Look at all fields in 'stuff' and if there
      // are any bad values return a string describing
      // the error. Otherwise return an empty string.
      return StringRef();
    }
  };

Flow Mapping
------------
A YAML "flow mapping" is a mapping that uses the inline notation
(e.g. { x: 1, y: 0 }) when written to YAML. To specify that a type should be
written in YAML using flow mapping, your MappingTraits specialization should
add "static const bool flow = true;". For instance:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  struct Stuff {
    ...
  };

  template <>
  struct MappingTraits<Stuff> {
    static void mapping(IO &io, Stuff &stuff) {
      ...
    }

    static const bool flow = true;
  };

Flow mappings are subject to line wrapping according to the Output object
configuration.

Sequence
========

To be translated to or from a YAML sequence for your type T you must specialize
llvm::yaml::SequenceTraits on T and implement two methods:
``size_t size(IO &io, T&)`` and
``T::value_type& element(IO &io, T&, size_t index)``. For example:

.. code-block:: c++

  template <>
  struct SequenceTraits<MySeq> {
    static size_t size(IO &io, MySeq &list) { ... }
    static MySeqEl &element(IO &io, MySeq &list, size_t index) { ... }
  };

The size() method returns how many elements are currently in your sequence.
The element() method returns a reference to the i'th element in the sequence.
When parsing YAML, the element() method may be called with an index one bigger
than the current size. Your element() method should allocate space for one
more element (using the default constructor if the element is a C++ object)
and return a reference to that newly allocated space.
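
Filled in for a vector-backed sequence, the outline above might look like the
following sketch (assuming MySeq is a std::vector<MySeqEl>):

.. code-block:: c++

  using llvm::yaml::SequenceTraits;
  using llvm::yaml::IO;

  typedef std::vector<MySeqEl> MySeq;

  template <>
  struct SequenceTraits<MySeq> {
    static size_t size(IO &io, MySeq &list) {
      return list.size();
    }
    static MySeqEl &element(IO &io, MySeq &list, size_t index) {
      // When parsing, grow the vector if asked for an element past the end.
      if ( index >= list.size() )
        list.resize(index + 1);
      return list[index];
    }
  };
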

Flow Sequence
-------------
A YAML "flow sequence" is a sequence that uses the inline notation
(e.g. [ foo, bar ]) when written to YAML. To specify that a sequence type
should be written in YAML as a flow sequence, your SequenceTraits
specialization should add "static const bool flow = true;". For instance:

.. code-block:: c++

  template <>
  struct SequenceTraits<MyList> {
    static size_t size(IO &io, MyList &list) { ... }
    static MyListEl &element(IO &io, MyList &list, size_t index) { ... }

    // The existence of this member causes YAML I/O to use a flow sequence
    static const bool flow = true;
  };

With the above, if you used MyList as the data type in your native data
structures, then when converted to YAML, a flow sequence of integers
will be used (e.g. [ 10, -3, 4 ]).

Flow sequences are subject to line wrapping according to the Output object
configuration.

Utility Macros
--------------
Since a common source of sequences is std::vector<>, YAML I/O provides macros:
LLVM_YAML_IS_SEQUENCE_VECTOR() and LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR() which
can be used to easily specify SequenceTraits<> on a std::vector type. YAML
I/O does not partially specialize SequenceTraits on std::vector<> because that
would force all vectors to be sequences. An example use of the macros:

.. code-block:: c++

  std::vector<MyType1>;
  std::vector<MyType2>;
  LLVM_YAML_IS_SEQUENCE_VECTOR(MyType1)
  LLVM_YAML_IS_FLOW_SEQUENCE_VECTOR(MyType2)



Document List
=============

YAML allows you to define multiple "documents" in a single YAML file. Each
new document starts with a left aligned "---" token. The end of all documents
is denoted with a left aligned "..." token. Many users of YAML will never
have need for multiple documents. The top level node in their YAML schema
will be a mapping or sequence. For those cases, the following is not needed.
But for cases where you do want multiple documents, you can specify a
trait for your document list type. The trait has the same methods as
SequenceTraits but is named DocumentListTraits. For example:

.. code-block:: c++

  template <>
  struct DocumentListTraits<MyDocList> {
    static size_t size(IO &io, MyDocList &list) { ... }
    static MyDocType &element(IO &io, MyDocList &list, size_t index) { ... }
  };


User Context Data
=================
When an llvm::yaml::Input or llvm::yaml::Output object is created, its
constructor takes an optional "context" parameter. This is a pointer to
whatever state information you might need.

For instance, in a previous example we showed how the conversion type for a
flags field could be determined at runtime based on the value of another field
in the mapping. But what if an inner mapping needs to know some field value
of an outer mapping? That is where the "context" parameter comes in. You
can set values in the context in the outer map's mapping() method and
retrieve those values in the inner map's mapping() method.

The context value is just a void*. All of your traits which use the context
and operate on your native data types need to agree on what the context value
actually is. It could be a pointer to an object or struct which your various
traits use to share context sensitive information.
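
For example, a sketch of that pattern (the OuterDoc and Inner types and their
fields are hypothetical; the CPUs enumeration is reused from the earlier
example), in which the outer mapping publishes itself through the context so
the inner mapping can consult its cpu field:

.. code-block:: c++

  using llvm::yaml::MappingTraits;
  using llvm::yaml::IO;

  struct Inner {           // hypothetical nested mapping type
    uint32_t value;
  };

  struct OuterDoc {        // hypothetical outer mapping type
    CPUs  cpu;
    Inner inner;
  };

  template <>
  struct MappingTraits<Inner> {
    static void mapping(IO &io, Inner &inner) {
      OuterDoc *doc = static_cast<OuterDoc*>(io.getContext());
      // doc->cpu can now influence how inner's fields are converted...
      io.mapRequired("value", inner.value);
    }
  };

  template <>
  struct MappingTraits<OuterDoc> {
    static void mapping(IO &io, OuterDoc &doc) {
      io.mapRequired("cpu", doc.cpu);
      // Make the outer mapping visible to nested mappings.
      io.setContext(&doc);
      io.mapRequired("inner", doc.inner);
      io.setContext(nullptr);
    }
  };
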

Output
======

The llvm::yaml::Output class is used to generate a YAML document from your
in-memory data structures, using the traits defined on your data types.
To instantiate an Output object you need an llvm::raw_ostream, an optional
context pointer and an optional wrapping column:

.. code-block:: c++

  class Output : public IO {
  public:
    Output(llvm::raw_ostream &, void *context = NULL, int WrapColumn = 70);

Once you have an Output object, you can use the C++ stream operator on it
to write your native data as YAML. One thing to recall is that a YAML file
can contain multiple "documents". If the top level data structure you are
streaming as YAML is a mapping, scalar, or sequence, then Output assumes you
are generating one document and wraps the output
with a leading "``---``" and a trailing "``...``".

The WrapColumn parameter will cause the flow mappings and sequences to
line-wrap when they go over the supplied column. Pass 0 to completely
suppress the wrapping.

.. code-block:: c++

  using llvm::yaml::Output;

  void dumpMyMapDoc(const MyMapType &info) {
    Output yout(llvm::outs());
    yout << info;
  }

The above could produce output like:

.. code-block:: yaml

  ---
  name: Tom
  hat-size: 7
  ...

On the other hand, if the top level data structure you are streaming as YAML
has a DocumentListTraits specialization, then Output walks through each element
of your DocumentList and generates a "---" before the start of each element
and ends with a "...".

.. code-block:: c++

  using llvm::yaml::Output;

  void dumpMyMapDoc(const MyDocListType &docList) {
    Output yout(llvm::outs());
    yout << docList;
  }

The above could produce output like:

.. code-block:: yaml

  ---
  name: Tom
  hat-size: 7
  ---
  name: Tom
  shoe-size: 11
  ...

Input
=====

The llvm::yaml::Input class is used to parse YAML document(s) into your native
data structures. To instantiate an Input
object you need a StringRef to the entire YAML file, and optionally a context
pointer:

.. code-block:: c++

  class Input : public IO {
  public:
    Input(StringRef inputContent, void *context = NULL);

Once you have an Input object, you can use the C++ stream operator to read
the document(s). If you expect there might be multiple YAML documents in
one file, you'll need to specialize DocumentListTraits on a list of your
document type and stream in that document list type. Otherwise you can
just stream in the document type. Also, you can check if there were
any syntax errors in the YAML by calling the error() method on the Input
object. For example:

.. code-block:: c++

  // Reading a single document
  using llvm::yaml::Input;

  Input yin(mb.getBuffer());

  // Parse the YAML file
  MyDocType theDoc;
  yin >> theDoc;

  // Check for error
  if ( yin.error() )
    return;


.. code-block:: c++

  // Reading multiple documents in one file
  using llvm::yaml::Input;

  LLVM_YAML_IS_DOCUMENT_LIST_VECTOR(MyDocType)

  Input yin(mb.getBuffer());

  // Parse the YAML file
  std::vector<MyDocType> theDocList;
  yin >> theDocList;

  // Check for error
  if ( yin.error() )
    return;