      1 <html><head><title>The design of toybox</title></head>
      2 <!--#include file="header.html" -->
      3 
      4 <b><h2>Design goals</h2></b>
      5 
      6 <p>Toybox should be simple, small, fast, and full featured.  Often, these
things need to be balanced against each other.  In general, keeping the
code simple is the most important (and hardest) goal, and small is slightly more
important than fast.  Features are the reason we write code in the first
place, but this has all been implemented before, so if we can't do a better
     11 job why bother?  It should be possible to get 80% of the way to each goal
     12 before they really start to fight.</p>
     13 
     14 <p>Here they are in reverse order of importance:</p>
     15 
     16 <b><h3>Features</h3></b>
     17 
     18 <p>The <a href=roadmap.html>roadmap</a> has the list of features we're
     19 trying to implement, and the reasons for them. After the 1.0 release
     20 some of that material may get moved here.</p>
     21 
     22 <p>Some things are simply outside the scope of the project: even though
     23 posix defines commands for compiling and linking, we're not going to include
     24 a compiler or linker (and support for a potentially infinite number of hardware
     25 targets). And until somebody comes up with a ~30k ssh implementation, we're
     26 going to point you at dropbear or polarssl.</p>
     27 
     28 <p>Environmental dependencies are a type of complexity, so needing other
     29 packages to build or run is a big downside.  For example, we don't use curses
     30 when we can simply output ansi escape sequences and trust all terminal
     31 programs written in the past 30 years to be able to support them. (A common
     32 use case is to download a statically linked toybox binary to an arbitrary
     33 Linux system, and use it in an otherwise unknown environment; being
     34 self-contained helps support this.)</p>
     35 
     36 <b><h3>Speed</h3></b>
     37 
     38 <p>It's easy to say lots about optimizing for speed (which is why this section
     39 is so long), but at the same time it's the optimization we care the least about.
     40 The essence of speed is being as efficient as possible, which means doing as
     41 little work as possible.  A design that's small and simple gets you 90% of the
     42 way there, and most of the rest is either fine-tuning or more trouble than
     43 it's worth (and often actually counterproductive).  Still, here's some
     44 advice:</p>
     45 
     46 <p>First, understand the darn problem you're trying to solve.  You'd think
     47 I wouldn't have to say this, but I do.  Trying to find a faster sorting
     48 algorithm is no substitute for figuring out a way to skip the sorting step
     49 entirely.  The fastest way to do anything is not to have to do it at all,
     50 and _all_ optimization boils down to avoiding unnecessary work.</p>
     51 
     52 <p>Speed is easy to measure; there are dozens of profiling tools for Linux
     53 (although personally I find the "time" command a good starting place).
     54 Don't waste too much time trying to optimize something you can't measure,
and there's not much point speeding up things you don't spend much time doing
     56 anyway.</p>
     57 
     58 <p>Understand the difference between throughput and latency.  Faster
     59 processors improve throughput, but don't always do much for latency.
     60 After 30 years of Moore's Law, most of the remaining problems are latency,
     61 not throughput.  (There are of course a few exceptions, like data compression
     62 code, encryption, rsync...)  Worry about throughput inside long-running
     63 loops, and worry about latency everywhere else.  (And don't worry too much
     64 about avoiding system calls or function calls or anything else in the name
of speed unless you are in the middle of a tight loop that you've already
proven isn't running fast enough.)</p>
     67 
     68 <p>"Locality of reference" is generally nice, in all sorts of contexts.
     69 It's obvious that waiting for disk access is 1000x slower than doing stuff in
     70 RAM (and making the disk seek is 10x slower than sequential reads/writes),
     71 but it's just as true that a loop which stays in L1 cache is many times faster
     72 than a loop that has to wait for a DRAM fetch on each iteration.  Don't worry
     73 about whether "&" is faster than "%" until your executable loop stays in L1
     74 cache and the data access is fetching cache lines intelligently.  (To
     75 understand DRAM, L1, and L2 cache, read Hannibal's marvelous ram guide at Ars
     76 Technica:
     77 <a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part1-2.html>part one</a>,
     78 <a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part2-1.html>part two</a>,
     79 <a href=http://arstechnica.com/paedia/r/ram_guide/ram_guide.part3-1.html>part three</a>,
     80 plus this
     81 <a href=http://arstechnica.com/articles/paedia/cpu/caching.ars/1>article on
     82 cacheing</a>, and this one on
     83 <a href=http://arstechnica.com/articles/paedia/cpu/bandwidth-latency.ars>bandwidth
     84 and latency</a>.
     85 And there's <a href=http://arstechnica.com/paedia/index.html>more where that came from</a>.)
Code running from L1 cache can execute one instruction per clock cycle, going
to L2 cache costs a dozen or so clock cycles, and waiting for a worst case DRAM
     88 fetch (round trip latency with a bank switch) can cost thousands of
     89 clock cycles.  (Historically, this disparity has gotten worse with time,
     90 just like the speed hit for swapping to disk.  These days, a _big_ L1 cache
     91 is 128k and a big L2 cache is a couple of megabytes.  A cheap low-power
     92 embedded processor may have 8k of L1 cache and no L2.)</p>
     93 
     94 <p>Learn how <a href=http://nommu.org/memory-faq.txt>virtual memory and
memory management units work</a>.  Don't touch
     96 memory you don't have to.  Even just reading memory evicts stuff from L1 and L2
     97 cache, which may have to be read back in later.  Writing memory can force the
     98 operating system to break copy-on-write, which allocates more memory.  (The
     99 memory returned by malloc() is only a virtual allocation, filled with lots of
    100 copy-on-write mappings of the zero page.  Actual physical pages get allocated
    101 when the copy-on-write gets broken by writing to the virtual page.  This
is why checking the return value of malloc() isn't very useful anymore: it
only detects running out of virtual memory, not physical memory.  Unless
    104 you're using a <a href=http://nommu.org>NOMMU system</a>, where all bets are off.)</p>
    105 
    106 <p>Don't think that just because you don't have a swap file the system can't
    107 start swap thrashing: any file backed page (ala mmap) can be evicted, and
    108 there's a reason all running programs require an executable file (they're
    109 mmaped, and can be flushed back to disk when memory is short).  And long
    110 before that, disk cache gets reclaimed and has to be read back in.  When the
    111 operating system really can't free up any more pages it triggers the out of
    112 memory killer to free up pages by killing processes (the alternative is the
    113 entire OS freezing solid).  Modern operating systems seldom run out of
    114 memory gracefully.</p>
    115 
    116 <p>Also, it's better to be simple than clever.  Many people think that mmap()
    117 is faster than read() because it avoids a copy, but twiddling with the memory
    118 management is itself slow, and can cause unnecessary CPU cache flushes.  And
    119 if a read faults in dozens of pages sequentially, but your mmap iterates
    120 backwards through a file (causing lots of seeks, each of which your program
    121 blocks waiting for), the read can be many times faster.  On the other hand, the
    122 mmap can sometimes use less memory, since the memory provided by mmap
    123 comes from the page cache (allocated anyway), and it can be faster if you're
    124 doing a lot of different updates to the same area.  The moral?  Measure, then
    125 try to speed things up, and measure again to confirm it actually _did_ speed
    126 things up rather than made them worse.  (And understanding what's really going
    127 on underneath is a big help to making it happen faster.)</p>
    128 
    129 <p>In general, being simple is better than being clever.  Optimization
    130 strategies change with time.  For example, decades ago precalculating a table
    131 of results (for things like isdigit() or cosine(int degrees)) was clearly
    132 faster because processors were so slow.  Then processors got faster and grew
    133 math coprocessors, and calculating the value each time became faster than
    134 the table lookup (because the calculation fit in L1 cache but the lookup
    135 had to go out to DRAM).  Then cache sizes got bigger (the Pentium M has
    136 2 megabytes of L2 cache) and the table fit in cache, so the table became
    137 fast again...  Predicting how changes in hardware will affect your algorithm
is difficult, and using ten year old optimization advice can produce
    139 laughably bad results.  But being simple and efficient is always going to
    140 give at least a reasonable result.</p>
    141 
    142 <p>The famous quote from Ken Thompson, "When in doubt, use brute force",
    143 applies to toybox.  Do the simple thing first, do as little of it as possible,
    144 and make sure it's right.  You can always speed it up later.</p>
    145 
    146 <b><h3>Size</h3></b>
    147 <p>Again, simple gives you most of this.  An algorithm that does less work
    148 is generally smaller.  Understand the problem, treat size as a cost, and
    149 get a good bang for the byte.</p>
    150 
    151 <p>Understand the difference between binary size, heap size, and stack size.
    152 Your binary is the executable file on disk, your heap is where malloc() memory
    153 lives, and your stack is where local variables (and function call return
    154 addresses) live.  Optimizing for binary size is generally good: executing
    155 fewer instructions makes your program run faster (and fits more of it in
    156 cache).  On embedded systems, binary size is especially precious because
    157 flash is expensive (and its successor, MRAM, even more so).  Small stack size
    158 is important for nommu systems because they have to preallocate their stack
    159 and can't make it bigger via page fault.  And everybody likes a small heap.</p>
    160 
    161 <p>Measure the right things.  Especially with modern optimizers, expecting
    162 something to be smaller is no guarantee it will be after the compiler's done
    163 with it.  Binary size isn't the most accurate indicator of the impact of a
    164 given change, because lots of things get combined and rounded during
    165 compilation and linking.  Matt Mackall's bloat-o-meter is a python script
    166 which compares two versions of a program, and shows size changes in each
    167 symbol (using the "nm" command behind the scenes).  To use this, run
    168 "make baseline" to build a baseline version to compare against, and
    169 then "make bloatometer" to compare that baseline version against the current
    170 code.</p>
    171 
    172 <p>Avoid special cases.  Whenever you see similar chunks of code in more than
    173 one place, it might be possible to combine them and have the users call shared
    174 code. (This is the most commonly cited trick, which doesn't make it easy. If
    175 seeing two lines of code do the same thing makes you slightly uncomfortable,
    176 you've got the right mindset.)</p>
    177 
    178 <p>Some specific advice: Using a char in place of an int when doing math
    179 produces significantly larger code on some platforms (notably arm),
    180 because each time the compiler has to emit code to convert it to int, do the
    181 math, and convert it back.  Bitfields have this problem on most platforms.
    182 Because of this, using char to index a for() loop is probably not a net win,
    183 although using char (or a bitfield) to store a value in a structure that's
    184 repeated hundreds of times can be a good tradeoff of binary size for heap
    185 space.</p>
    186 
    187 <b><h3>Simple</h3></b>
    188 
    189 <p>Complexity is a cost, just like code size or runtime speed. Treat it as
    190 a cost, and spend your complexity budget wisely. (Sometimes this means you
    191 can't afford a feature because it complicates the code too much to be
    192 worth it.)</p>
    193 
    194 <p>Simplicity has lots of benefits.  Simple code is easy to maintain, easy to
    195 port to new processors, easy to audit for security holes, and easy to
    196 understand.</p>
    197 
    198 <p>Simplicity itself can have subtle non-obvious aspects requiring a tradeoff
    199 between one kind of simplicity and another: simple for the computer to
    200 execute and simple for a human reader to understand aren't always the
    201 same thing. A compact and clever algorithm that does very little work may
    202 not be as easy to explain or understand as a larger more explicit version
    203 requiring more code, memory, and CPU time. When balancing these, err on the
    204 side of doing less work, but add comments describing how you
    205 could be more explicit.</p>
    206 
    207 <p>In general, comments are not a substitute for good code (or well chosen
    208 variable or function names). Commenting "x += y;" with "/* add y to x */"
    209 can actually detract from the program's readability. If you need to describe
    210 what the code is doing (rather than _why_ it's doing it), that means the
    211 code itself isn't very clear.</p>
    212 
    213 <p>Prioritizing simplicity tends to serve our other goals: simplifying code
    214 generally reduces its size (both in terms of binary size and runtime memory
    215 usage), and avoiding unnecessary work makes code run faster. Smaller code
    216 also tends to run faster on modern hardware due to CPU cacheing: fitting your
    217 code into L1 cache is great, and staying in L2 cache is still pretty good.</p>
    218 
    219 <p><a href=http://www.joelonsoftware.com/articles/fog0000000069.html>Joel
    220 Spolsky argues against throwing code out and starting over</a>, and he has
    221 good points: an existing debugged codebase contains a huge amount of baked
    222 in knowledge about strange real-world use cases that the designers didn't
    223 know about until users hit the bugs, and most of this knowledge is never
    224 explicitly stated anywhere except in the source code.</p>
    225 
    226 <p>That said, the Mythical Man-Month's "build one to throw away" advice points
    227 out that until you've solved the problem you don't properly understand it, and
    228 about the time you finish your first version is when you've finally figured
out what you _should_ have done.  (The corollary is that if you build one
    230 expecting to throw it away, you'll actually wind up throwing away two.  You
    231 don't understand the problem until you _have_ solved it.)</p>
    232 
    233 <p>Joel is talking about what closed source software can afford to do: Code
    234 that works and has been paid for is a corporate asset not lightly abandoned.
    235 Open source software can afford to re-implement code that works, over and
    236 over from scratch, for incremental gains.  Before toybox, the unix command line
had already been reimplemented from scratch several times (the
    238 original AT&amp;T Unix command line in assembly and then in C, the BSD
    239 versions, Coherent was the first full from-scratch Unix clone in 1980,
    240 Minix was another clone which Linux was inspired by and developed under,
    241 the GNU tools were yet another rewrite intended for use in the stillborn
    242 "Hurd" project, BusyBox was still another rewrite, and more versions
    243 were written in Plan 9, uclinux, klibc, sash, sbase, s6, and of course
    244 android toolbox...) but maybe toybox can do a better job. :)</p>
    245 
    246 <p>P.S.  How could I resist linking to an article about
    247 <a href=http://blog.outer-court.com/archive/2005-08-24-n14.html>why
    248 programmers should strive to be lazy and dumb</a>?</p>
    249 
    250 <b><h2>Portability issues</h2></b>
    251 
    252 <b><h3>Platforms</h3></b>
    253 <p>Toybox should run on Android (all commands with musl-libc, as large a subset
    254 as practical with bionic), and every other hardware platform Linux runs on.
    255 Other posix/susv4 environments (perhaps MacOS X or newlib+libgloss) are vaguely
    256 interesting but only if they're easy to support; I'm not going to spend much
    257 effort on them.</p>
    258 
    259 <p>I don't do windows.</p>
    260 
    261 <p>We depend on C99 and posix-2008 libc features such as the openat() family of
    262 functions. We also assume certain "modern" linux kernel behavior such
    263 as large environment sizes (linux commit b6a2fea39318, went into 2.6.22
    264 released July 2007). In theory this shouldn't prevent us from working on
    265 older kernels or other implementations (ala BSD), but we don't police their
    266 corner cases.</p>
    267 
    268 <b><h3>32/64 bit</h3></b>
    269 <p>Toybox should work on both 32 bit and 64 bit systems.  By the end of 2008
    270 64 bit hardware will be the new desktop standard, but 32 bit hardware will
    271 continue to be important in embedded devices for years to come.</p>
    272 
    273 <p>Toybox relies on the fact that on any Unix-like platform, pointer and long
    274 are always the same size (on both 32 and 64 bit).  Pointer and int are _not_
    275 the same size on 64 bit systems, but pointer and long are.</p>
    276 
    277 <p>This is guaranteed by the LP64 memory model, a Unix standard (which Linux
    278 and MacOS X both implement, and which modern 64 bit processors such as
    279 x86-64 were <a href=http://www.pagetable.com/?p=6>designed for</a>).  See
    280 <a href=http://www.unix.org/whitepapers/64bit.html>the LP64 standard</a> and
    281 <a href=http://www.unix.org/version2/whatsnew/lp64_wp.html>the LP64
    282 rationale</a> for details.</p>
    283 
    284 <p>Note that Windows doesn't work like this, and I don't care.
    285 <a href=http://blogs.msdn.com/oldnewthing/archive/2005/01/31/363790.aspx>The
    286 insane legacy reasons why this is broken on Windows are explained here.</a></p>
    287 
    288 <b><h3>Signedness of char</h3></b>
<p>On platforms like x86, variables of type char default to signed.  On
platforms like arm, char defaults to unsigned.  This difference can lead to
    291 subtle portability bugs, and to avoid them we specify which one we want by
    292 feeding the compiler -funsigned-char.</p>
    293 
    294 <p>The reason to pick "unsigned" is that way we're 8-bit clean by default.</p>
    295 
<b><h3>Error messages and internationalization</h3></b>
    297 
    298 <p>Error messages are extremely terse not just to save bytes, but because we
    299 don't use any sort of _("string") translation infrastructure. (We're not
translating the command names themselves, so we must expect some minimal
english knowledge from our users, but let's keep it to a minimum.)</p>
    302 
    303 <p>Thus "bad -A '%c'" is
    304 preferable to "Unrecognized address base '%c'", because a non-english speaker
    305 can see that -A was the problem (giving back the command line argument they
    306 supplied). A user with a ~20 word english vocabulary is
    307 more likely to know (or guess) "bad" than the longer message, and you can
    308 use "bad" in place of "invalid", "inappropriate", "unrecognized"...
    309 Similarly when atolx_range() complains about range constraints with
    310 "4 < 17" or "12 > 5", it's intentional: those don't need to be translated.</p>
    311 
    312 <p>The strerror() messages produced by perror_exit() and friends should be
    313 localized by libc, and our error functions also prepend the command name
    314 (which non-english speakers can presumably recognize already). Keep the
    315 explanation in between to a minimum, and where possible feed back the values
    316 they passed in to identify _what_ we couldn't process.
    317 If you say perror_exit("setsockopt"), you've identified the action you
    318 were trying to take, and the perror gives a translated error message (from libc)
    319 explaining _why_ it couldn't do it, so you probably don't need to add english
    320 words like "failed" or "couldn't assign".</p>
    321 
    322 <p>All commands should be 8-bit clean, with explicit
    323 <a href=http://yarchive.net/comp/linux/utf8.html>UTF-8</a> support where
    324 necessary. Assume all input data might be utf8, and at least preserve
    325 it and pass it through. (For this reason, our build is -funsigned-char on
    326 all architectures; "char" is unsigned unless you stick "signed" in front
    327 of it.)</p>
    328 
    329 <p>Locale support isn't currently a goal; that's a presentation layer issue
    330 (I.E. a GUI problem).</p>
    331 
<a name="codestyle"></a>
    333 <h2>Coding style</h2>
    334 
    335 <p>The real coding style holy wars are over things that don't matter
    336 (whitespace, indentation, curly bracket placement...) and thus have no
    337 obviously correct answer. As in academia, "the fighting is so vicious because
    338 the stakes are so small". That said, being consistent makes the code readable,
    339 so here's how to make toybox code look like other toybox code.</p>
    340 
    341 <p>Toybox source uses two spaces per indentation level, and wraps at 80
    342 columns. (Indentation of continuation lines is awkward no matter what
you do; sometimes two spaces looks better, sometimes indenting to the
contents of the opening parenthesis looks better.)</p>
    345 
    346 <p>I'm aware this indentation style creeps some people out, so here's
    347 the sed invocation to convert groups of two leading spaces to tabs:</p>
    348 <blockquote><pre>
    349 sed -i ':loop;s/^\( *\)  /\1\t/;t loop' filename
    350 </pre></blockquote>
    351 
    352 <p>And here's the sed invocation to convert leading tabs to two spaces each:</p>
    353 <blockquote><pre>
    354 sed -i ':loop;s/^\( *\)\t/\1  /;t loop' filename
    355 </pre></blockquote>
    356 
    357 <p>There's a space after C flow control statements that look like functions, so
    358 "if (blah)" instead of "if(blah)". (Note that sizeof is actually an
    359 operator, so we don't give it a space for the same reason ++ doesn't get
    360 one. Yeah, it doesn't need the parentheses either, but it gets them.
    361 These rules are mostly to make the code look consistent, and thus easier
    362 to read.) We also put a space around assignment operators (on both sides),
    363 so "int x = 0;".</p>
    364 
    365 <p>Blank lines (vertical whitespace) go between thoughts. "We were doing that,
    366 now we're doing this." (Not a hard and fast rule about _where_ it goes,
    367 but there should be some for the same reason writing has paragraph breaks.)</p>
    368 
    369 <p>Variable declarations go at the start of blocks, with a blank line between
    370 them and other code. Yes, c99 allows you to put them anywhere, but they're
    371 harder to find if you do that. If there's a large enough distance between
    372 the declaration and the code using it to make you uncomfortable, maybe the
function's too big, or maybe there's an if statement or something you can
use as an excuse to start a new closer block.</p>
    375 
<p>If statements with a single line body go on the same line if the result
    377 fits in 80 columns, on a second line if it doesn't. We usually only use
    378 curly brackets if we need to, either because the body is multiple lines or
    379 because we need to distinguish which if an else binds to. Curly brackets go
    380 on the same line as the test/loop statement. The exception to both cases is
    381 if the test part of an if statement is long enough to split into multiple
    382 lines, then we put the curly bracket on its own line afterwards (so it doesn't
get lost in the multiple line variably indented mess), and we put it there
    384 even if it's only grouping one line (because the indentation level is not
    385 providing clear information in that case).</p>
    386 
    387 <p>I.E.</p>
    388 
    389 <blockquote>
    390 <pre>
    391 if (thingy) thingy;
    392 else thingy;
    393 
    394 if (thingy) {
    395   thingy;
    396   thingy;
    397 } else thingy;
    398 
    399 if (blah blah blah...
    400     && blah blah blah)
    401 {
    402   thingy;
    403 }
    404 </pre></blockquote>
    405 
    406 <p>Gotos are allowed for error handling, and for breaking out of
    407 nested loops. In general, a goto should only jump forward (not back), and
    408 should either jump to the end of an outer loop, or to error handling code
    409 at the end of the function. Goto labels are never indented: they override the
    410 block structure of the file. Putting them at the left edge makes them easy
    411 to spot as overrides to the normal flow of control, which they are.</p>
    412 
    413 <p>When there's a shorter way to say something, we tend to do that for
    414 consistency. For example, we tend to say "*blah" instead of "blah[0]" unless
    415 we're referring to more than one element of blah. Similarly, NULL is
    416 really just 0 (and C will automatically typecast 0 to anything, except in
    417 varargs), "if (function() != NULL)" is the same as "if (function())",
    418 "x = (blah == NULL);" is "x = !blah;", and so on.</p>
    419 
    420 <p>The goal is to be
    421 concise, not cryptic: if you're worried about the code being hard to
understand, splitting it into multiple steps on multiple lines is
better than a NOP operation like "!= NULL". A common sign of trying too
hard is nesting ? : three levels deep; sometimes if/else and a temporary
    425 variable is just plain easier to read. If you think you need a comment,
    426 you may be right.</p>
    427 
    428 <p>Comments are nice, but don't overdo it. Comments should explain _why_,
    429 not how. If the code doesn't make the how part obvious, that's a problem with
    430 the code. Sometimes choosing a better variable name is more revealing than a
    431 comment. Comments on their own line are better than comments on the end of
    432 lines, and they usually have a blank line before them. Most of toybox's
    433 comments are c99 style // single line comments, even when there's more than
    434 one of them. The /* multiline */ style is used at the start for the metadata,
    435 but not so much in the code itself. They don't nest cleanly, are easy to leave
    436 accidentally unterminated, need extra nonfunctional * to look right, and if
    437 you need _that_ much explanation maybe what you really need is a URL citation
    438 linking to a standards document? Long comments can fall out of sync with what
    439 the code is doing. Comments do not get regression tested. There's no such
    440 thing as self-documenting code (if nothing else, code with _no_ comments
    441 is a bit unfriendly to new readers), but "chocolate sauce isn't the answer
    442 to bad cooking" either. Don't use comments as a crutch to explain unclear
    443 code if the code can be fixed.</p>
    444 
    445 <!--#include file="footer.html" -->
    446