<html><head><title>The design of toybox</title></head>
<!--#include file="header.html" -->

<a name="goals"><b><h2><a href="#goals">Design goals</a></h2></b>

<p>Toybox should be simple, small, fast, and full featured. In that order.</p>

<p>When these goals need to be balanced off against each other, keeping the code
as simple as it can be to do what it does is the most important (and hardest)
goal. Then keeping it small is slightly more important than making it fast.
Features are the reason we write code in the first place, but this has all
been implemented before, so if we can't do a better job why bother?</p>

<p>It should be possible to get 80% of the way to each goal
before they really start to fight. Here they are in reverse order
of importance:</p>

<b><h3>Features</h3></b>

<p>The hard part is deciding what NOT to include.
A project without boundaries will bloat itself
to death. One of the hardest but most important things a project must
do is draw a line and say "no, this is somebody else's problem, not
something we should do."</p>

<p>Some things are simply outside the scope of the project: even though
posix defines commands for compiling and linking, we're not going to include
a compiler or linker (and support for a potentially infinite number of hardware
targets). And until somebody comes up with a ~30k ssh implementation (with
a crypto algorithm that won't need replacing every 5 years), we're
going to point you at dropbear or bearssl.</p>

<p>The <a href=roadmap.html>roadmap</a> has the list of features we're
trying to implement, and the reasons why we decided to include those
features. After the 1.0 release some of that material may get moved here,
but for now it needs its own page.</p>

<p>There are potential features (such as a screen/tmux implementation)
that might be worth adding after 1.0, in part because they could share
infrastructure with things like "less" and "vi" and so might be less work for
us to do than an external from-scratch implementation. But for now, major
new features outside posix, android's existing commands, and the needs of
development systems are a distraction from the 1.0 release.</p>

<b><h3>Speed</h3></b>

<p>It's easy to say lots about optimizing for speed (which is why this section
is so long), but at the same time it's the optimization we care the least about.
The essence of speed is being as efficient as possible, which means doing as
little work as possible. A design that's small and simple gets you 90% of the
way there, and most of the rest is either fine-tuning or more trouble than
it's worth (and often actually counterproductive). Still, here's some
advice:</p>

<p>First, understand the darn problem you're trying to solve. You'd think
I wouldn't have to say this, but I do. Trying to find a faster sorting
algorithm is no substitute for figuring out a way to skip the sorting step
entirely. The fastest way to do anything is not to have to do it at all,
and _all_ optimization boils down to avoiding unnecessary work.</p>

<p>Speed is easy to measure; there are dozens of profiling tools for Linux
(although personally I find the "time" command a good starting place).
Don't waste too much time trying to optimize something you can't measure,
and there's not much point speeding up things you don't spend much time doing
anyway.</p>
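<p>When the "time" command is too coarse, a few lines of C can bracket the
specific chunk of work you care about. (A minimal standalone sketch, not part
of toybox's lib; the nanotime() helper is made up for the example and just
wraps libc's clock_gettime().)</p>

<blockquote><pre>
#include &lt;stdio.h>
#include &lt;time.h>

// Monotonic time in nanoseconds, so intervals survive wall clock changes.
static long long nanotime(void)
{
  struct timespec ts;

  clock_gettime(CLOCK_MONOTONIC, &ts);
  return ts.tv_sec*1000000000LL+ts.tv_nsec;
}

int main(void)
{
  long long start = nanotime(), total = 0;
  int i;

  // Placeholder work: time whatever you actually care about here.
  for (i = 0; i&lt;1000000; i++) total += i;

  printf("%lld ns (total %lld)\n", nanotime()-start, total);

  return 0;
}
</pre></blockquote>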
<p>Understand the difference between throughput and latency. Faster
processors improve throughput, but don't always do much for latency.
After 30 years of Moore's Law, most of the remaining problems are latency,
not throughput. (There are of course a few exceptions, like data compression
code, encryption, rsync...) Worry about throughput inside long-running
loops, and worry about latency everywhere else. (And don't worry too much
about avoiding system calls or function calls or anything else in the name
of speed unless you are in the middle of a tight loop that you've already
proven isn't running fast enough.)</p>

<p>"Locality of reference" is generally nice, in all sorts of contexts.
It's obvious that waiting for disk access is 1000x slower than doing stuff in
RAM (and making the disk seek is 10x slower than sequential reads/writes),
but it's just as true that a loop which stays in L1 cache is many times faster
than a loop that has to wait for a DRAM fetch on each iteration. Don't worry
about whether "&" is faster than "%" until your executable loop stays in L1
cache and the data access is fetching cache lines intelligently. (To
understand DRAM, L1, and L2 cache, read Hannibal's marvelous ram guide at Ars
Technica:
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-2.html>part one</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part2-1.html>part two</a>,
<a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part3-1.html>part three</a>,
plus this
<a href=http://arstechnica.com/articles/paedia/cpu/caching.ars/1>article on
caching</a>, and this one on
<a href=http://arstechnica.com/articles/paedia/cpu/bandwidth-latency.ars>bandwidth
and latency</a>.
And there's <a href=http://arstechnica.com/paedia/index.html>more where that came from</a>.)
Code running out of L1 cache can execute one instruction per clock cycle, going
to L2 cache costs a dozen or so clock cycles, and waiting for a worst case DRAM
fetch (round trip latency with a bank switch) can cost thousands of
clock cycles. (Historically, this disparity has gotten worse with time,
just like the speed hit for swapping to disk. These days, a _big_ L1 cache
is 128k and a big L2 cache is a couple of megabytes. A cheap low-power
embedded processor may have 8k of L1 cache and no L2.)</p>

<p>Learn how <a href=http://nommu.org/memory-faq.txt>virtual memory and
memory management units work</a>. Don't touch
memory you don't have to. Even just reading memory evicts stuff from L1 and L2
cache, which may have to be read back in later. Writing memory can force the
operating system to break copy-on-write, which allocates more memory. (The
memory returned by malloc() is only a virtual allocation, filled with lots of
copy-on-write mappings of the zero page. Actual physical pages get allocated
when the copy-on-write gets broken by writing to the virtual page. This
is why checking the return value of malloc() isn't very useful anymore: it
only detects running out of virtual memory, not physical memory. Unless
you're using a <a href=http://nommu.org>NOMMU system</a>, where all bets are off.)</p>
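<p>A quick way to convince yourself of this (a standalone sketch, nothing
toybox-specific): malloc() a large buffer and watch the process's resident set
size only grow once you actually write to the pages.</p>

<blockquote><pre>
#include &lt;stdio.h>
#include &lt;stdlib.h>
#include &lt;unistd.h>

int main(void)
{
  long pagesize = sysconf(_SC_PAGESIZE), i, len = 1&lt;&lt;30;
  char *buf = malloc(len);  // a gigabyte of copy-on-write zero page mappings

  if (!buf) return 1;
  printf("allocated, check RSS now (pid %d)\n", (int)getpid());
  getchar();

  // Writing one byte per page breaks copy-on-write and allocates real RAM.
  for (i = 0; i&lt;len; i += pagesize) buf[i] = 1;
  printf("dirtied, check RSS again\n");
  getchar();

  return 0;
}
</pre></blockquote>

<p>(Run "grep VmRSS /proc/$PID/status" at each pause, or just watch the process
in top.)</p>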
<p>Don't think that just because you don't have a swap file the system can't
start swap thrashing: any file backed page (ala mmap) can be evicted, and
there's a reason all running programs require an executable file (they're
mmaped, and can be flushed back to disk when memory is short). And long
before that, disk cache gets reclaimed and has to be read back in. When the
operating system really can't free up any more pages it triggers the out of
memory killer to free up pages by killing processes (the alternative is the
entire OS freezing solid). Modern operating systems seldom run out of
memory gracefully.</p>

<p>Also, it's better to be simple than clever. Many people think that mmap()
is faster than read() because it avoids a copy, but twiddling with the memory
management is itself slow, and can cause unnecessary CPU cache flushes. And
if a read faults in dozens of pages sequentially, but your mmap iterates
backwards through a file (causing lots of seeks, each of which your program
blocks waiting for), the read can be many times faster. On the other hand, the
mmap can sometimes use less memory, since the memory provided by mmap
comes from the page cache (allocated anyway), and it can be faster if you're
doing a lot of different updates to the same area. The moral? Measure, then
try to speed things up, and measure again to confirm it actually _did_ speed
things up rather than made them worse. (And understanding what's really going
on underneath is a big help to making it happen faster.)</p>
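<p>When in doubt, write both versions and race them. Here's the kind of
throwaway comparison that settles the argument (a sketch using plain libc,
not toybox code: it sums a file's bytes both ways so "time" can judge):</p>

<blockquote><pre>
#include &lt;fcntl.h>
#include &lt;stdio.h>
#include &lt;string.h>
#include &lt;sys/mman.h>
#include &lt;sys/stat.h>
#include &lt;unistd.h>

// Sum the file with plain read() into a small stack buffer.
static unsigned long sum_read(int fd)
{
  char buf[65536];
  unsigned long total = 0;
  ssize_t len, i;

  while ((len = read(fd, buf, sizeof(buf))) > 0)
    for (i = 0; i&lt;len; i++) total += (unsigned char)buf[i];

  return total;
}

// Sum the file through a read-only mmap() of the whole thing.
static unsigned long sum_mmap(int fd)
{
  struct stat st;
  unsigned long total = 0;
  char *map;
  off_t i;

  if (fstat(fd, &st) || !st.st_size) return 0;
  map = mmap(0, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
  if (map == MAP_FAILED) return 0;
  for (i = 0; i&lt;st.st_size; i++) total += (unsigned char)map[i];
  munmap(map, st.st_size);

  return total;
}

int main(int argc, char *argv[])
{
  int fd;

  if (argc != 3) return 1;
  if ((fd = open(argv[2], O_RDONLY)) == -1) return 1;
  printf("%lu\n", strcmp(argv[1], "mmap") ? sum_read(fd) : sum_mmap(fd));
  close(fd);

  return 0;
}
</pre></blockquote>

<p>Which one wins depends on the file size, the state of the page cache, and
the hardware, which is exactly the point: measure on the system you care
about.</p>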
<p>In general, being simple is better than being clever. Optimization
strategies change with time. For example, decades ago precalculating a table
of results (for things like isdigit() or cosine(int degrees)) was clearly
faster because processors were so slow. Then processors got faster and grew
math coprocessors, and calculating the value each time became faster than
the table lookup (because the calculation fit in L1 cache but the lookup
had to go out to DRAM). Then cache sizes got bigger (the Pentium M has
2 megabytes of L2 cache) and the table fit in cache, so the table became
fast again... Predicting how changes in hardware will affect your algorithm
is difficult, and using ten year old optimization advice can produce
laughably bad results. But being simple and efficient is always going to
give at least a reasonable result.</p>

<p>The famous quote from Ken Thompson, "When in doubt, use brute force",
applies to toybox. Do the simple thing first, do as little of it as possible,
and make sure it's right. You can always speed it up later.</p>

<b><h3>Size</h3></b>
<p>Again, being simple gives you most of this. An algorithm that does less work
is generally smaller. Understand the problem, treat size as a cost, and
get a good bang for the byte.</p>

<p>Understand the difference between binary size, heap size, and stack size.
Your binary is the executable file on disk, your heap is where malloc() memory
lives, and your stack is where local variables (and function call return
addresses) live. Optimizing for binary size is generally good: executing
fewer instructions makes your program run faster (and fits more of it in
cache). On embedded systems, binary size is especially precious because
flash is expensive (and its successor, MRAM, even more so). Small stack size
is important for nommu systems because they have to preallocate their stack
and can't make it bigger via page fault. And everybody likes a small heap.</p>

<p>Measure the right things. Especially with modern optimizers, expecting
something to be smaller is no guarantee it will be after the compiler's done
with it. Binary size isn't the most accurate indicator of the impact of a
given change, because lots of things get combined and rounded during
compilation and linking. Matt Mackall's bloat-o-meter is a python script
which compares two versions of a program, and shows size changes in each
symbol (using the "nm" command behind the scenes). To use this, run
"make baseline" to build a baseline version to compare against, and
then "make bloatometer" to compare that baseline version against the current
code.</p>

<p>Avoid special cases. Whenever you see similar chunks of code in more than
one place, it might be possible to combine them and have the users call shared
code. (This is the most commonly cited trick, which doesn't make it easy. If
seeing two lines of code do the same thing makes you slightly uncomfortable,
you've got the right mindset.)</p>

<p>Some specific advice: Using a char in place of an int when doing math
produces significantly larger code on some platforms (notably arm),
because each time the compiler has to emit code to convert it to int, do the
math, and convert it back. Bitfields have this problem on most platforms.
Because of this, using char to index a for() loop is probably not a net win,
although using char (or a bitfield) to store a value in a structure that's
repeated hundreds of times can be a good tradeoff of binary size for heap
space.</p>
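<p>For instance (a hypothetical struct, not from toybox): if a structure gets
allocated once per file in a directory tree, shrinking its fields pays for the
slightly larger conversion code, but the loop counter walking the array should
stay an int.</p>

<blockquote><pre>
struct fileinfo {
  char type;           // one of a handful of values: shrinks each repeated instance
  unsigned dirty:1;    // bitfields pack flags tightly in repeated structs
  unsigned seen:1;
  unsigned size;
};

unsigned long total_size(struct fileinfo *fi, int count)
{
  unsigned long total = 0;
  int i;               // int, not char: it's math in a loop, not storage

  for (i = 0; i&lt;count; i++) if (!fi[i].dirty) total += fi[i].size;

  return total;
}
</pre></blockquote>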
<b><h3>Simplicity</h3></b>

<p>Complexity is a cost, just like code size or runtime speed. Treat it as
a cost, and spend your complexity budget wisely. (Sometimes this means you
can't afford a feature because it complicates the code too much to be
worth it.)</p>

<p>Simplicity has lots of benefits. Simple code is easy to maintain, easy to
port to new processors, easy to audit for security holes, and easy to
understand.</p>

<p>Simplicity itself can have subtle non-obvious aspects requiring a tradeoff
between one kind of simplicity and another: simple for the computer to
execute and simple for a human reader to understand aren't always the
same thing. A compact and clever algorithm that does very little work may
not be as easy to explain or understand as a larger more explicit version
requiring more code, memory, and CPU time. When balancing these, err on the
side of doing less work, but add comments describing how you
could be more explicit.</p>

<p>In general, comments are not a substitute for good code (or well chosen
variable or function names). Commenting "x += y;" with "/* add y to x */"
can actually detract from the program's readability. If you need to describe
what the code is doing (rather than _why_ it's doing it), that means the
code itself isn't very clear.</p>

<p>Environmental dependencies are another type of complexity, so needing other
packages to build or run is a big downside. For example, we don't use curses
when we can simply output ansi escape sequences and trust all terminal
programs written in the past 30 years to be able to support them. Regularly
testing that we work with C libraries which support static linking (musl does,
glibc doesn't) is another way to be self-contained with known boundaries:
it doesn't have to be the only way to build the project, but should be regularly
tested and supported.</p>

<p>Prioritizing simplicity tends to serve our other goals: simplifying code
generally reduces its size (both in terms of binary size and runtime memory
usage), and avoiding unnecessary work makes code run faster. Smaller code
also tends to run faster on modern hardware due to CPU caching: fitting your
code into L1 cache is great, and staying in L2 cache is still pretty good.</p>

<p>But a simple implementation is not always the smallest or fastest, and
balancing simplicity vs the other goals can be difficult. For example, the
atolx_range() function in lib/lib.c always uses the 64 bit "long long" type,
which produces larger and slower code on 32 bit platforms, and is often
assigned into smaller integer types. Although libc has parallel
implementations for different data sizes (atoi, atol, atoll) we chose a
common codepath which can cover all cases (every user goes through the
same codepath, getting the maximum amount of testing and avoiding
surprising variations in behavior).</p>

<p>On the other hand, the "tail" command has two codepaths, one for seekable
files and one for nonseekable files. Although the nonseekable case can handle
all inputs (and is required when input comes from a pipe or similar, so cannot
be removed), reading through multiple gigabytes of data to reach the end of
seekable files was both a common case and hugely penalized by a nonseekable
approach (half-minute wait vs instant results). This is one example
where performance did outweigh simplicity of implementation.</p>
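<p>The decision between those two codepaths boils down to one system call.
(A simplified sketch, not the actual tail.c logic: the helper name is made up,
and it just asks whether lseek() works on the input.)</p>

<blockquote><pre>
#include &lt;unistd.h>

// Returns 1 if we can cheaply jump around in this fd, 0 if we have to stream
// it (pipes and sockets report their current offset as unseekable).
int fd_seekable(int fd)
{
  return lseek(fd, 0, SEEK_CUR) != -1;
}
</pre></blockquote>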
<p><a href=http://www.joelonsoftware.com/articles/fog0000000069.html>Joel
Spolsky argues against throwing code out and starting over</a>, and he has
good points: an existing debugged codebase contains a huge amount of baked
in knowledge about strange real-world use cases that the designers didn't
know about until users hit the bugs, and most of this knowledge is never
explicitly stated anywhere except in the source code.</p>

<p>That said, the Mythical Man-Month's "build one to throw away" advice points
out that until you've solved the problem you don't properly understand it, and
about the time you finish your first version is when you've finally figured
out what you _should_ have done. (The corollary is that if you build one
expecting to throw it away, you'll actually wind up throwing away two. You
don't understand the problem until you _have_ solved it.)</p>

<p>Joel is talking about what closed source software can afford to do: Code
that works and has been paid for is a corporate asset not lightly abandoned.
Open source software can afford to re-implement code that works, over and
over from scratch, for incremental gains. Before toybox, the unix command line
had already been reimplemented from scratch several times (the
original AT&T Unix command line in assembly and then in C, the BSD
versions, Coherent was the first full from-scratch Unix clone in 1980,
Minix was another clone which Linux was inspired by and developed under,
the GNU tools were yet another rewrite intended for use in the stillborn
"Hurd" project, BusyBox was still another rewrite, and more versions
were written in Plan 9, uclinux, klibc, sash, sbase, s6, and of course
android toolbox...). But maybe toybox can do a better job. :)</p>

<p>As Antoine de St. Exupery (author of "The Little Prince" and an early
aircraft designer) said, "Perfection is achieved, not when there
is nothing left to add, but when there is nothing left to take away."
And Ken Thompson (creator of Unix) said "One of my most productive
days was throwing away 1000 lines of code." It's always possible to
come up with a better way to do it.</p>

<p>P.S. How could I resist linking to an article about
<a href=http://blog.outer-court.com/archive/2005-08-24-n14.html>why
programmers should strive to be lazy and dumb</a>?</p>

<a name="portability"><b><h2><a href="#portability">Portability issues</a></h2></b>

<b><h3>Platforms</h3></b>
<p>Toybox should run on Android (all commands with musl-libc, as large a subset
as practical with bionic), and every other hardware platform Linux runs on.
Other posix/susv4 environments (perhaps MacOS X or newlib+libgloss) are vaguely
interesting but only if they're easy to support; I'm not going to spend much
effort on them.</p>

<p>I don't do windows.</p>

<p>We depend on C99 and posix-2008 libc features such as the openat() family of
functions. We also assume certain "modern" linux kernel behavior such
as large environment sizes (linux commit b6a2fea39318, went into 2.6.22
released July 2007). In theory this shouldn't prevent us from working on
older kernels or other implementations (ala BSD), but we don't police their
corner cases.</p>

<b><h3>32/64 bit</h3></b>
<p>Toybox should work on both 32 bit and 64 bit systems. 64 bit desktop
hardware went mainstream in 2005 and was essentially ubiquitous
by the end of the decade, but 32 bit hardware will continue to be important
in embedded devices for several more years.</p>

<p>Toybox relies on the fact that on any Unix-like platform, pointer and long
are always the same size (on both 32 and 64 bit). Pointer and int are _not_
the same size on 64 bit systems, but pointer and long are.</p>

<p>This is guaranteed by the LP64 memory model, a Unix standard (which Linux
and MacOS X both implement, and which modern 64 bit processors such as
x86-64 were <a href=http://www.pagetable.com/?p=6>designed for</a>). See
<a href=http://www.unix.org/whitepapers/64bit.html>the LP64 standard</a> and
<a href=http://www.unix.org/version2/whatsnew/lp64_wp.html>the LP64
rationale</a> for details.</p>

<p>Note that Windows doesn't work like this, and I don't care.
<a href=http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx>The
insane legacy reasons why this is broken on Windows are explained here.</a></p>
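<p>The nice thing about this assumption is that it's cheap to verify at
compile time. A classic C99-compatible trick (just an illustration, not
something toybox carries around) is a typedef that only compiles when the
sizes match:</p>

<blockquote><pre>
// Array size goes negative (a compile error) if pointer and long differ.
typedef char pointer_and_long_match[sizeof(long) == sizeof(void *) ? 1 : -1];
</pre></blockquote>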
<b><h3>Signedness of char</h3></b>
<p>On platforms like x86, variables of type char default to signed. On
platforms like arm, char defaults to unsigned. This difference can lead to
subtle portability bugs, and to avoid them we specify which one we want by
feeding the compiler -funsigned-char.</p>

<p>The reason to pick "unsigned" is that this way we're 8-bit clean by default.</p>

<p><h3>Error messages and internationalization:</h3></p>

<p>Error messages are extremely terse not just to save bytes, but because we
don't use any sort of _("string") translation infrastructure. (We're not
translating the command names themselves, so we must expect a minimum amount of
english knowledge from our users, but let's keep it to a minimum.)</p>

<p>Thus "bad -A '%c'" is
preferable to "Unrecognized address base '%c'", because a non-english speaker
can see that -A was the problem (giving back the command line argument they
supplied). A user with a ~20 word english vocabulary is
more likely to know (or guess) "bad" than the longer message, and you can
use "bad" in place of "invalid", "inappropriate", "unrecognized"...
Similarly when atolx_range() complains about range constraints with
"4 < 17" or "12 > 5", it's intentional: those don't need to be translated.</p>

<p>The strerror() messages produced by perror_exit() and friends should be
localized by libc, and our error functions also prepend the command name
(which non-english speakers can presumably recognize already). Keep the
explanation in between to a minimum, and where possible feed back the values
they passed in to identify _what_ we couldn't process.
If you say perror_exit("setsockopt"), you've identified the action you
were trying to take, and the perror gives a translated error message (from libc)
explaining _why_ it couldn't do it, so you probably don't need to add english
words like "failed" or "couldn't assign".</p>
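<p>Spelled out, the pattern looks something like this (a hypothetical fragment:
the socket option and the function around it are made up, but perror_exit()
is the real helper from lib/lib.c):</p>

<blockquote><pre>
#include "toys.h"
#include &lt;sys/socket.h>

// Enable broadcast on an already open socket, dying tersely on failure.
static void set_broadcast(int fd)
{
  int one = 1;

  // Failure prints the command name (prepended by the error function),
  // "setsockopt" (what we were doing), and libc's localized strerror()
  // text (why it failed). No extra english needed.
  if (setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &one, sizeof(one)))
    perror_exit("setsockopt");
}
</pre></blockquote>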
<p>All commands should be 8-bit clean, with explicit
<a href=http://yarchive.net/comp/linux/utf8.html>UTF-8</a> support where
necessary. Assume all input data might be utf8, and at least preserve
it and pass it through. (For this reason, our build is -funsigned-char on
all architectures; "char" is unsigned unless you stick "signed" in front
of it.)</p>

<p>Locale support isn't currently a goal; that's a presentation layer issue
(I.E. a GUI problem).</p>

<p><h3>Shared Libraries</h3></p>

<p>Toybox's policy on shared libraries is that they should never be
required, but can optionally be used to improve performance.</p>

<p>Toybox should provide the command line utilities for
<a href=roadmap.html#dev_env>self-hosting development environments</a>,
and an easy way to set up "hermetic builds" (I.E. builds which provide
their own dependencies, isolating the build logic from host command version
skew with a simple known build environment). In both cases, external
dependencies defeat the purpose.</p>

<p>This means toybox should provide full functionality without relying
on any external dependencies (other than libc). But toybox may optionally use
libraries such as zlib and openssl to improve performance for things like
deflate and sha1sum, which lets the corresponding built-in implementations
be simple (and thus slow). But the built-in implementations need to exist and
work.</p>

<p>(This is why we use an external https wrapper program: depending on
openssl or similar to be linked in would change the behavior of toybox.)</p>

<a name="codestyle" />
<h2>Coding style</h2>

<p>The real coding style holy wars are over things that don't matter
(whitespace, indentation, curly bracket placement...) and thus have no
obviously correct answer. As in academia, "the fighting is so vicious because
the stakes are so small". That said, being consistent makes the code readable,
so here's how to make toybox code look like other toybox code.</p>

<p>Toybox source uses two spaces per indentation level, and wraps at 80
columns. (Indentation of continuation lines is awkward no matter what
you do; sometimes two spaces looks better, sometimes indenting to the
contents of the parentheses looks better.)</p>

<p>I'm aware this indentation style creeps some people out, so here's
the sed invocation to convert groups of two leading spaces to tabs:</p>
<blockquote><pre>
sed -i ':loop;s/^\( *\)  /\1\t/;t loop' filename
</pre></blockquote>

<p>And here's the sed invocation to convert leading tabs to two spaces each:</p>
<blockquote><pre>
sed -i ':loop;s/^\( *\)\t/\1  /;t loop' filename
</pre></blockquote>

<p>There's a space after C flow control statements that look like functions, so
"if (blah)" instead of "if(blah)". (Note that sizeof is actually an
operator, so we don't give it a space for the same reason ++ doesn't get
one. Yeah, it doesn't need the parentheses either, but it gets them.
These rules are mostly to make the code look consistent, and thus easier
to read.) We also put a space around assignment operators (on both sides),
so "int x = 0;".</p>

<p>Blank lines (vertical whitespace) go between thoughts. "We were doing that,
now we're doing this." (Not a hard and fast rule about _where_ it goes,
but there should be some for the same reason writing has paragraph breaks.)</p>

<p>Variable declarations go at the start of blocks, with a blank line between
them and other code. Yes, c99 allows you to put them anywhere, but they're
harder to find if you do that. If there's a large enough distance between
the declaration and the code using it to make you uncomfortable, maybe the
function's too big, or is there an if statement or something you can
use as an excuse to start a new closer block?</p>

<p>If statements with a single line body go on the same line if the result
fits in 80 columns, on a second line if it doesn't. We usually only use
curly brackets if we need to, either because the body is multiple lines or
because we need to distinguish which if an else binds to. Curly brackets go
on the same line as the test/loop statement. The exception to both cases is
if the test part of an if statement is long enough to split into multiple
lines, then we put the curly bracket on its own line afterwards (so it doesn't
get lost in the multiple line variably indented mess), and we put it there
even if it's only grouping one line (because the indentation level is not
providing clear information in that case).</p>

<p>I.E.</p>

<blockquote>
<pre>
if (thingy) thingy;
else thingy;

if (thingy) {
  thingy;
  thingy;
} else thingy;

if (blah blah blah...
    && blah blah blah)
{
  thingy;
}
</pre></blockquote>

<p>Gotos are allowed for error handling, and for breaking out of
nested loops. In general, a goto should only jump forward (not back), and
should either jump to the end of an outer loop, or to error handling code
at the end of the function. Goto labels are never indented: they override the
block structure of the file. Putting them at the left edge makes them easy
to spot as overrides to the normal flow of control, which they are.</p>
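<p>For example (a made-up function, just to show the shape): the error path
unwinds everything that was set up, the label sits at the left edge, and every
jump goes forward:</p>

<blockquote><pre>
#include &lt;fcntl.h>
#include &lt;stdlib.h>
#include &lt;unistd.h>

// Copy the first "len" bytes of a file into a freshly malloc()ed buffer,
// returning 0 on any failure.
char *read_prefix(char *name, size_t len)
{
  char *buf = malloc(len);
  int fd = -1;

  if (!buf) goto error;
  if ((fd = open(name, O_RDONLY)) == -1) goto error;
  if (read(fd, buf, len) != len) goto error;
  close(fd);

  return buf;

error:
  if (fd != -1) close(fd);
  free(buf);

  return 0;
}
</pre></blockquote>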
<p>When there's a shorter way to say something, we tend to do that for
consistency. For example, we tend to say "*blah" instead of "blah[0]" unless
we're referring to more than one element of blah. Similarly, NULL is
really just 0 (and C will automatically typecast 0 to anything, except in
varargs), so "if (function() != NULL)" is the same as "if (function())",
"x = (blah == NULL);" is "x = !blah;", and so on.</p>

<p>The goal is to be
concise, not cryptic: if you're worried about the code being hard to
understand, splitting it into multiple steps on multiple lines is
better than a NOP operation like "!= NULL". A common sign of trying too
hard is nesting ? : three levels deep; sometimes if/else and a temporary
variable is just plain easier to read. If you think you need a comment,
you may be right.</p>

<p>Comments are nice, but don't overdo it. Comments should explain _why_,
not how. If the code doesn't make the how part obvious, that's a problem with
the code. Sometimes choosing a better variable name is more revealing than a
comment. Comments on their own line are better than comments on the end of
lines, and they usually have a blank line before them. Most of toybox's
comments are c99 style // single line comments, even when there's more than
one of them. The /* multiline */ style is used at the start for the metadata,
but not so much in the code itself. They don't nest cleanly, are easy to leave
accidentally unterminated, need extra nonfunctional * to look right, and if
you need _that_ much explanation maybe what you really need is a URL citation
linking to a standards document? Long comments can fall out of sync with what
the code is doing. Comments do not get regression tested. There's no such
thing as self-documenting code (if nothing else, code with _no_ comments
is a bit unfriendly to new readers), but "chocolate sauce isn't the answer
to bad cooking" either. Don't use comments as a crutch to explain unclear
code if the code can be fixed.</p>

<!--#include file="footer.html" -->