********************************
Functional Programming HOWTO
********************************

:Author: A. M. Kuchling
:Release: 0.32

In this document, we'll take a tour of Python's features suitable for
implementing programs in a functional style.  After an introduction to the
concepts of functional programming, we'll look at language features such as
:term:`iterator`\s and :term:`generator`\s and relevant library modules such as
:mod:`itertools` and :mod:`functools`.


Introduction
============

This section explains the basic concept of functional programming; if
you're just interested in learning about Python language features,
skip to the next section on :ref:`functional-howto-iterators`.

Programming languages support decomposing problems in several different ways:

* Most programming languages are **procedural**: programs are lists of
  instructions that tell the computer what to do with the program's input.  C,
  Pascal, and even Unix shells are procedural languages.

* In **declarative** languages, you write a specification that describes the
  problem to be solved, and the language implementation figures out how to
  perform the computation efficiently.  SQL is the declarative language you're
  most likely to be familiar with; a SQL query describes the data set you want
  to retrieve, and the SQL engine decides whether to scan tables or use
  indexes, which subclauses should be performed first, etc.

* **Object-oriented** programs manipulate collections of objects.  Objects have
  internal state and support methods that query or modify this internal state
  in some way.  Smalltalk and Java are object-oriented languages.  C++ and
  Python are languages that support object-oriented programming, but don't
  force the use of object-oriented features.

* **Functional** programming decomposes a problem into a set of functions.
  Ideally, functions only take inputs and produce outputs, and don't have any
  internal state that affects the output produced for a given input.
  Well-known functional languages include the ML family (Standard ML, OCaml,
  and other variants) and Haskell.

The designers of some computer languages choose to emphasize one particular
approach to programming.  This often makes it difficult to write programs that
use a different approach.  Other languages are multi-paradigm languages that
support several different approaches.  Lisp, C++, and Python are
multi-paradigm; you can write programs or libraries that are largely
procedural, object-oriented, or functional in all of these languages.  In a
large program, different sections might be written using different approaches;
the GUI might be object-oriented while the processing logic is procedural or
functional, for example.

In a functional program, input flows through a set of functions.  Each
function operates on its input and produces some output.  Functional style
discourages functions with side effects that modify internal state or make
other changes that aren't visible in the function's return value.  Functions
that have no side effects at all are called **purely functional**.  Avoiding
side effects means not using data structures that get updated as a program
runs; every function's output must only depend on its input.
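
To make the distinction concrete, here's a minimal sketch contrasting the two
styles; the function names and data are purely illustrative::

    # Purely functional: the result depends only on the inputs,
    # and nothing outside the function is changed.
    def add_tax(price, rate):
        return price * (1 + rate)

    # Not purely functional: the function updates external state
    # (a module-level list), so part of its effect isn't visible
    # in its return value.
    _audit_log = []

    def add_tax_and_log(price, rate):
        _audit_log.append(price)
        return price * (1 + rate)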

Some languages are very strict about purity and don't even have assignment
statements such as ``a=3`` or ``c = a + b``, but it's difficult to avoid all
side effects.  Printing to the screen or writing to a disk file are side
effects, for example.  In Python, calls to :func:`print` and :func:`time.sleep`
return no useful value; they're called only for their side effects of sending
some text to the screen or pausing execution for a second.

Python programs written in functional style usually won't go to the extreme of
avoiding all I/O or all assignments; instead, they'll provide a
functional-appearing interface but will use non-functional features
internally.  For example, the implementation of a function will still use
assignments to local variables, but won't modify global variables or have
other side effects.

Functional programming can be considered the opposite of object-oriented
programming.  Objects are little capsules containing some internal state along
with a collection of method calls that let you modify this state, and programs
consist of making the right set of state changes.  Functional programming
wants to avoid state changes as much as possible and works with data flowing
between functions.  In Python you might combine the two approaches by writing
functions that take and return instances representing objects in your
application (e-mail messages, transactions, etc.).

Functional design may seem like an odd constraint to work under.  Why should
you avoid objects and side effects?  There are theoretical and practical
advantages to the functional style:

* Formal provability.
* Modularity.
* Composability.
* Ease of debugging and testing.


Formal provability
------------------

A theoretical benefit is that it's easier to construct a mathematical proof
that a functional program is correct.

For a long time researchers have been interested in finding ways to
mathematically prove programs correct.  This is different from testing a
program on numerous inputs and concluding that its output is usually correct,
or reading a program's source code and concluding that the code looks right;
the goal is instead a rigorous proof that a program produces the right result
for all possible inputs.

The technique used to prove programs correct is to write down **invariants**,
properties of the input data and of the program's variables that are always
true.  For each line of code, you then show that if invariants X and Y are
true **before** the line is executed, the slightly different invariants X' and
Y' are true **after** the line is executed.  This continues until you reach
the end of the program, at which point the invariants should match the desired
conditions on the program's output.

Functional programming's avoidance of assignments arose because assignments
are difficult to handle with this technique; assignments can break invariants
that were true before the assignment without producing any new invariants that
can be propagated onward.

Unfortunately, proving programs correct is largely impractical and not
relevant to Python software.
Even trivial programs require proofs that are several pages long; the proof of
correctness for a moderately complicated program would be enormous, and few or
none of the programs you use daily (the Python interpreter, your XML parser,
your web browser) could be proven correct.  Even if you wrote down or
generated a proof, there would then be the question of verifying the proof;
maybe there's an error in it, and you wrongly believe you've proved the
program correct.


Modularity
----------

A more practical benefit of functional programming is that it forces you to
break apart your problem into small pieces.  Programs are more modular as a
result.  It's easier to specify and write a small function that does one thing
than a large function that performs a complicated transformation.  Small
functions are also easier to read and to check for errors.


Ease of debugging and testing
-----------------------------

Testing and debugging a functional-style program is easier.

Debugging is simplified because functions are generally small and clearly
specified.  When a program doesn't work, each function is an interface point
where you can check that the data are correct.  You can look at the
intermediate inputs and outputs to quickly isolate the function that's
responsible for a bug.

Testing is easier because each function is a potential subject for a unit
test.  Functions don't depend on system state that needs to be replicated
before running a test; instead you only have to synthesize the right input and
then check that the output matches expectations.


Composability
-------------

As you work on a functional-style program, you'll write a number of functions
with varying inputs and outputs.  Some of these functions will be unavoidably
specialized to a particular application, but others will be useful in a wide
variety of programs.  For example, a function that takes a directory path and
returns all the XML files in the directory, or a function that takes a
filename and returns its contents, can be applied to many different
situations.

Over time you'll form a personal library of utilities.  Often you'll assemble
new programs by arranging existing functions in a new configuration and
writing a few functions specialized for the current task.


.. _functional-howto-iterators:

Iterators
=========

I'll start by looking at a Python language feature that's an important
foundation for writing functional-style programs: iterators.

An iterator is an object representing a stream of data; this object returns
the data one element at a time.  A Python iterator must support a method
called :meth:`~iterator.__next__` that takes no arguments and always returns
the next element of the stream.  If there are no more elements in the stream,
:meth:`~iterator.__next__` must raise the :exc:`StopIteration` exception.
Iterators don't have to be finite, though; it's perfectly reasonable to write
an iterator that produces an infinite stream of data.

The built-in :func:`iter` function takes an arbitrary object and tries to
return an iterator that will return the object's contents or elements, raising
:exc:`TypeError` if the object doesn't support iteration.  Several of Python's
built-in data types support iteration, the most common being lists and
dictionaries.  An object is called :term:`iterable` if you can get an iterator
for it.
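
For example, here's a minimal sketch of a hand-written iterator that follows
the protocol described above (the class name is just illustrative)::

    class Countdown:
        """Iterator that counts down from n to 1."""
        def __init__(self, n):
            self.n = n

        def __iter__(self):
            # An iterator is itself iterable: it returns itself.
            return self

        def __next__(self):
            if self.n <= 0:
                raise StopIteration
            value = self.n
            self.n -= 1
            return value

    for x in Countdown(3):
        print(x)          # prints 3, 2, 1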

You can experiment with the iteration interface manually:

>>> L = [1, 2, 3]
>>> it = iter(L)
>>> it  #doctest: +ELLIPSIS
<...iterator object at ...>
>>> it.__next__()  # same as next(it)
1
>>> next(it)
2
>>> next(it)
3
>>> next(it)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
StopIteration

Python expects iterable objects in several different contexts, the most
important being the :keyword:`for` statement.  In the statement ``for X in Y``,
Y must be an iterator or some object for which :func:`iter` can create an
iterator.  These two statements are equivalent::

    for i in iter(obj):
        print(i)

    for i in obj:
        print(i)

Iterators can be materialized as lists or tuples by using the :func:`list` or
:func:`tuple` constructor functions:

>>> L = [1, 2, 3]
>>> iterator = iter(L)
>>> t = tuple(iterator)
>>> t
(1, 2, 3)

Sequence unpacking also supports iterators: if you know an iterator will
return N elements, you can unpack them into an N-tuple:

>>> L = [1, 2, 3]
>>> iterator = iter(L)
>>> a, b, c = iterator
>>> a, b, c
(1, 2, 3)

Built-in functions such as :func:`max` and :func:`min` can take a single
iterator argument and will return the largest or smallest element.  The
``"in"`` and ``"not in"`` operators also support iterators: ``X in iterator``
is true if X is found in the stream returned by the iterator.  You'll run into
obvious problems if the iterator is infinite; :func:`max` and :func:`min` will
never return, and if the element X never appears in the stream, the ``"in"``
and ``"not in"`` operators won't return either.

Note that you can only go forward in an iterator; there's no way to get the
previous element, reset the iterator, or make a copy of it.  Iterator objects
can optionally provide these additional capabilities, but the iterator protocol
only specifies the :meth:`~iterator.__next__` method.  Functions may therefore
consume all of the iterator's output, and if you need to do something different
with the same stream, you'll have to create a new iterator.


Data Types That Support Iterators
---------------------------------

We've already seen how lists and tuples support iterators.  In fact, any Python
sequence type, such as strings, will automatically support creation of an
iterator.

Calling :func:`iter` on a dictionary returns an iterator that will loop over
the dictionary's keys::

    >>> m = {'Jan': 1, 'Feb': 2, 'Mar': 3, 'Apr': 4, 'May': 5, 'Jun': 6,
    ...      'Jul': 7, 'Aug': 8, 'Sep': 9, 'Oct': 10, 'Nov': 11, 'Dec': 12}
    >>> for key in m:
    ...     print(key, m[key])
    Jan 1
    Feb 2
    Mar 3
    Apr 4
    May 5
    Jun 6
    Jul 7
    Aug 8
    Sep 9
    Oct 10
    Nov 11
    Dec 12

Since Python 3.7, dictionaries preserve insertion order, so the keys are
produced in the order in which they were added; in earlier versions the
iteration order was essentially arbitrary, because it depended on the hash
ordering of the keys.

Applying :func:`iter` to a dictionary always loops over the keys, but
dictionaries have methods that return other iterators.  If you want to iterate
over values or key/value pairs, you can explicitly call the
:meth:`~dict.values` or :meth:`~dict.items` methods to get an appropriate
iterator.
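
For instance, a quick check using the same dictionary ``m`` as above (a minimal
sketch)::

    >>> list(m.values())
    [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
    >>> list(m.items())[:2]
    [('Jan', 1), ('Feb', 2)]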

The :func:`dict` constructor can accept an iterator that returns a finite
stream of ``(key, value)`` tuples:

>>> L = [('Italy', 'Rome'), ('France', 'Paris'), ('US', 'Washington DC')]
>>> dict(iter(L))
{'Italy': 'Rome', 'France': 'Paris', 'US': 'Washington DC'}

Files also support iteration by calling the :meth:`~io.TextIOBase.readline`
method until there are no more lines in the file.  This means you can read each
line of a file like this::

    for line in file:
        # do something for each line
        ...

Sets can take their contents from an iterable and let you iterate over the
set's elements::

    S = {2, 3, 5, 7, 11, 13}
    for i in S:
        print(i)


Generator expressions and list comprehensions
=============================================

Two common operations on an iterator's output are 1) performing some operation
for every element, and 2) selecting a subset of elements that meet some
condition.  For example, given a list of strings, you might want to strip off
trailing whitespace from each line or extract all the strings containing a
given substring.

List comprehensions and generator expressions (short form: "listcomps" and
"genexps") are a concise notation for such operations, borrowed from the
functional programming language Haskell (https://www.haskell.org/).  You can
strip all the whitespace from a stream of strings with the following code::

    line_list = ['  line 1\n', 'line 2  \n', ...]

    # Generator expression -- returns iterator
    stripped_iter = (line.strip() for line in line_list)

    # List comprehension -- returns list
    stripped_list = [line.strip() for line in line_list]

You can select only certain elements by adding an ``"if"`` condition::

    stripped_list = [line.strip() for line in line_list
                     if line != ""]

With a list comprehension, you get back a Python list; ``stripped_list`` is a
list containing the resulting lines, not an iterator.  Generator expressions
return an iterator that computes the values as necessary, not needing to
materialize all the values at once.  This means that list comprehensions aren't
useful if you're working with iterators that return an infinite stream or a
very large amount of data.  Generator expressions are preferable in these
situations.

Generator expressions are surrounded by parentheses ("()") and list
comprehensions are surrounded by square brackets ("[]").  Generator expressions
have the form::

    ( expression for expr in sequence1
                 if condition1
                 for expr2 in sequence2
                 if condition2
                 for expr3 in sequence3 ...
                 if condition3
                 for exprN in sequenceN
                 if conditionN )

Again, for a list comprehension only the outside brackets are different (square
brackets instead of parentheses).

The elements of the generated output will be the successive values of
``expression``.  The ``if`` clauses are all optional; if present, ``expression``
is only evaluated and added to the result when ``condition`` is true.

Generator expressions always have to be written inside parentheses, but the
parentheses signalling a function call also count.  If you want to create an
iterator that will be immediately passed to a function you can write::

    obj_total = sum(obj.count for obj in list_all_objects())
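
Because the values are produced on demand, a generator expression can only be
consumed once.  Here's a minimal sketch of that behaviour (the data is purely
illustrative)::

    squares = (n * n for n in range(5))   # nothing is computed yet
    print(sum(squares))                   # 30 -- values produced on demand
    print(sum(squares))                   # 0  -- the iterator is now exhausted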

The ``for...in`` clauses contain the sequences to be iterated over.  The
sequences do not have to be the same length, because they are iterated over
from left to right, **not** in parallel.  For each element in ``sequence1``,
``sequence2`` is looped over from the beginning.  ``sequence3`` is then looped
over for each resulting pair of elements from ``sequence1`` and ``sequence2``.

To put it another way, a list comprehension or generator expression is
equivalent to the following Python code::

    for expr1 in sequence1:
        if not (condition1):
            continue   # Skip this element
        for expr2 in sequence2:
            if not (condition2):
                continue   # Skip this element
            ...
            for exprN in sequenceN:
                if not (conditionN):
                    continue   # Skip this element

                # Output the value of
                # the expression.

This means that when there are multiple ``for...in`` clauses but no ``if``
clauses, the length of the resulting output will be equal to the product of the
lengths of all the sequences.  If you have two lists of length 3, the output
list is 9 elements long:

>>> seq1 = 'abc'
>>> seq2 = (1, 2, 3)
>>> [(x, y) for x in seq1 for y in seq2]  #doctest: +NORMALIZE_WHITESPACE
[('a', 1), ('a', 2), ('a', 3),
 ('b', 1), ('b', 2), ('b', 3),
 ('c', 1), ('c', 2), ('c', 3)]

To avoid introducing an ambiguity into Python's grammar, if ``expression`` is
creating a tuple, it must be surrounded with parentheses.  The first list
comprehension below is a syntax error, while the second one is correct::

    # Syntax error
    [x, y for x in seq1 for y in seq2]
    # Correct
    [(x, y) for x in seq1 for y in seq2]


Generators
==========

Generators are a special class of functions that simplify the task of writing
iterators.  Regular functions compute a value and return it, but generators
return an iterator that returns a stream of values.

You're doubtless familiar with how regular function calls work in Python or C.
When you call a function, it gets a private namespace where its local variables
are created.  When the function reaches a ``return`` statement, the local
variables are destroyed and the value is returned to the caller.  A later call
to the same function creates a new private namespace and a fresh set of local
variables.  But, what if the local variables weren't thrown away on exiting a
function?  What if you could later resume the function where it left off?  This
is what generators provide; they can be thought of as resumable functions.

Here's the simplest example of a generator function:

>>> def generate_ints(N):
...     for i in range(N):
...         yield i

Any function containing a :keyword:`yield` keyword is a generator function;
this is detected by Python's :term:`bytecode` compiler which compiles the
function specially as a result.

When you call a generator function, it doesn't return a single value; instead
it returns a generator object that supports the iterator protocol.  On
executing the ``yield`` expression, the generator outputs the value of ``i``,
similar to a ``return`` statement.  The big difference between ``yield`` and a
``return`` statement is that on reaching a ``yield`` the generator's state of
execution is suspended and local variables are preserved.  On the next call to
the generator's :meth:`~generator.__next__` method, the function will resume
executing.

Here's a sample usage of the ``generate_ints()`` generator:

>>> gen = generate_ints(3)
>>> gen  #doctest: +ELLIPSIS
<generator object generate_ints at ...>
>>> next(gen)
0
>>> next(gen)
1
>>> next(gen)
2
>>> next(gen)
Traceback (most recent call last):
  File "stdin", line 1, in ?
  File "stdin", line 2, in generate_ints
StopIteration

You could equally write ``for i in generate_ints(5)``, or ``a, b, c =
generate_ints(3)``.

Inside a generator function, ``return value`` causes ``StopIteration(value)``
to be raised from the :meth:`~generator.__next__` method.  Once this happens,
or the bottom of the function is reached, the procession of values ends and
the generator cannot yield any further values.

You could achieve the effect of generators manually by writing your own class
and storing all the local variables of the generator as instance variables.
For example, returning a list of integers could be done by setting
``self.count`` to 0, and having the :meth:`~iterator.__next__` method increment
``self.count`` and return it.
However, for a moderately complicated generator, writing a corresponding class
can be much messier.

The test suite included with Python's library,
:source:`Lib/test/test_generators.py`, contains
a number of more interesting examples.  Here's one generator that implements an
in-order traversal of a tree using generators recursively. ::

    # A recursive generator that generates a tree's labels in in-order.
    def inorder(t):
        if t:
            for x in inorder(t.left):
                yield x

            yield t.label

            for x in inorder(t.right):
                yield x

Two other examples in ``test_generators.py`` produce solutions for the N-Queens
problem (placing N queens on an NxN chess board so that no queen threatens
another) and the Knight's Tour (finding a route that takes a knight to every
square of an NxN chessboard without visiting any square twice).


Passing values into a generator
-------------------------------

In Python 2.4 and earlier, generators only produced output.  Once a generator's
code was invoked to create an iterator, there was no way to pass any new
information into the function when its execution was resumed.  You could hack
together this ability by making the generator look at a global variable or by
passing in some mutable object that callers then modify, but these approaches
are messy.

In Python 2.5 there's a simple way to pass values into a generator.
:keyword:`yield` became an expression, returning a value that can be assigned
to a variable or otherwise operated on::

    val = (yield i)

I recommend that you **always** put parentheses around a ``yield`` expression
when you're doing something with the returned value, as in the above example.
The parentheses aren't always necessary, but it's easier to always add them
instead of having to remember when they're needed.

(:pep:`342` explains the exact rules, which are that a ``yield``-expression
must always be parenthesized except when it occurs at the top-level expression
on the right-hand side of an assignment.  This means you can write ``val =
yield i`` but have to use parentheses when there's an operation, as in
``val = (yield i) + 12``.)

Values are sent into a generator by calling its :meth:`send(value)
<generator.send>` method.
This method resumes the generator's code and the ``yield`` expression returns
the specified value.  If the regular :meth:`~generator.__next__` method is
called, the ``yield`` returns ``None``.

Here's a simple counter that increments by 1 and allows changing the value of
the internal counter.

.. testcode::

    def counter(maximum):
        i = 0
        while i < maximum:
            val = (yield i)
            # If value provided, change counter
            if val is not None:
                i = val
            else:
                i += 1

And here's an example of changing the counter:

>>> it = counter(10)  #doctest: +SKIP
>>> next(it)  #doctest: +SKIP
0
>>> next(it)  #doctest: +SKIP
1
>>> it.send(8)  #doctest: +SKIP
8
>>> next(it)  #doctest: +SKIP
9
>>> next(it)  #doctest: +SKIP
Traceback (most recent call last):
  File "t.py", line 15, in ?
    next(it)
StopIteration

Because ``yield`` will often be returning ``None``, you should always check for
this case.  Don't just use its value in expressions unless you're sure that the
:meth:`~generator.send` method will be the only method used to resume your
generator function.

In addition to :meth:`~generator.send`, there are two other methods on
generators:

* :meth:`throw(type, value=None, traceback=None) <generator.throw>` is used to
  raise an exception inside the generator; the exception is raised by the
  ``yield`` expression where the generator's execution is paused.

* :meth:`~generator.close` raises a :exc:`GeneratorExit` exception inside the
  generator to terminate the iteration.  On receiving this exception, the
  generator's code must either raise :exc:`GeneratorExit` or
  :exc:`StopIteration`; catching the exception and doing anything else is
  illegal and will trigger a :exc:`RuntimeError`.  :meth:`~generator.close`
  will also be called by Python's garbage collector when the generator is
  garbage-collected.

  If you need to run cleanup code when a :exc:`GeneratorExit` occurs, I suggest
  using a ``try: ... finally:`` suite instead of catching :exc:`GeneratorExit`.

The cumulative effect of these changes is to turn generators from one-way
producers of information into both producers and consumers.

Generators also become **coroutines**, a more generalized form of subroutines.
Subroutines are entered at one point and exited at another point (the top of
the function, and a ``return`` statement), but coroutines can be entered,
exited, and resumed at many different points (the ``yield`` statements).


Built-in functions
==================

Let's look in more detail at built-in functions often used with iterators.

Two of Python's built-in functions, :func:`map` and :func:`filter`, duplicate
the features of generator expressions:

:func:`map(f, iterA, iterB, ...) <map>` returns an iterator over the sequence
``f(iterA[0], iterB[0]), f(iterA[1], iterB[1]), f(iterA[2], iterB[2]), ...``.

>>> def upper(s):
...     return s.upper()

>>> list(map(upper, ['sentence', 'fragment']))
['SENTENCE', 'FRAGMENT']
>>> [upper(s) for s in ['sentence', 'fragment']]
['SENTENCE', 'FRAGMENT']

You can of course achieve the same effect with a list comprehension.

:func:`filter(predicate, iter) <filter>` returns an iterator over all the
sequence elements that meet a certain condition, and is similarly duplicated by
list comprehensions.
A **predicate** is a function that returns the truth
value of some condition; for use with :func:`filter`, the predicate must take a
single value.

>>> def is_even(x):
...     return (x % 2) == 0

>>> list(filter(is_even, range(10)))
[0, 2, 4, 6, 8]

This can also be written as a list comprehension:

>>> [x for x in range(10) if is_even(x)]
[0, 2, 4, 6, 8]

:func:`enumerate(iter) <enumerate>` counts off the elements in the iterable,
returning 2-tuples containing the count and each element. ::

    >>> for item in enumerate(['subject', 'verb', 'object']):
    ...     print(item)
    (0, 'subject')
    (1, 'verb')
    (2, 'object')

:func:`enumerate` is often used when looping through a list and recording the
indexes at which certain conditions are met::

    f = open('data.txt', 'r')
    for i, line in enumerate(f):
        if line.strip() == '':
            print('Blank line at line #%i' % i)

:func:`sorted(iterable, key=None, reverse=False) <sorted>` collects all the
elements of the iterable into a list, sorts the list, and returns the sorted
result.  The *key* and *reverse* arguments are passed through to the
constructed list's :meth:`~list.sort` method. ::

    >>> import random
    >>> # Generate 8 random numbers between [0, 10000)
    >>> rand_list = random.sample(range(10000), 8)
    >>> rand_list  #doctest: +SKIP
    [769, 7953, 9828, 6431, 8442, 9878, 6213, 2207]
    >>> sorted(rand_list)  #doctest: +SKIP
    [769, 2207, 6213, 6431, 7953, 8442, 9828, 9878]
    >>> sorted(rand_list, reverse=True)  #doctest: +SKIP
    [9878, 9828, 8442, 7953, 6431, 6213, 2207, 769]

(For a more detailed discussion of sorting, see the :ref:`sortinghowto`.)


The :func:`any(iter) <any>` and :func:`all(iter) <all>` built-ins look at the
truth values of an iterable's contents.  :func:`any` returns ``True`` if any
element in the iterable is a true value, and :func:`all` returns ``True`` if
all of the elements are true values:

>>> any([0, 1, 0])
True
>>> any([0, 0, 0])
False
>>> any([1, 1, 1])
True
>>> all([0, 1, 0])
False
>>> all([0, 0, 0])
False
>>> all([1, 1, 1])
True


:func:`zip(iterA, iterB, ...) <zip>` takes one element from each iterable and
returns them in a tuple::

    zip(['a', 'b', 'c'], (1, 2, 3)) =>
      ('a', 1), ('b', 2), ('c', 3)

It doesn't construct an in-memory list and exhaust all the input iterators
before returning; instead tuples are constructed and returned only if they're
requested.  (The technical term for this behaviour is `lazy evaluation
<https://en.wikipedia.org/wiki/Lazy_evaluation>`__.)

This iterator is intended to be used with iterables that are all of the same
length.  If the iterables are of different lengths, the resulting stream will
be the same length as the shortest iterable. ::

    zip(['a', 'b'], (1, 2, 3)) =>
      ('a', 1), ('b', 2)

You should avoid doing this, though, because an element may be taken from the
longer iterators and discarded.  This means you can't go on to use the
iterators further because you risk skipping a discarded element.
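
For instance, a quick interactive check of both behaviours (a minimal
sketch)::

    >>> list(zip(['a', 'b', 'c'], (1, 2, 3)))
    [('a', 1), ('b', 2), ('c', 3)]
    >>> list(zip(['a', 'b'], (1, 2, 3)))   # stops at the shortest input
    [('a', 1), ('b', 2)]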


The itertools module
====================

The :mod:`itertools` module contains a number of commonly-used iterators as
well as functions for combining several iterators.  This section will
introduce the module's contents by showing small examples.

The module's functions fall into a few broad classes:

* Functions that create a new iterator based on an existing iterator.
* Functions for treating an iterator's elements as function arguments.
* Functions for selecting portions of an iterator's output.
* A function for grouping an iterator's output.

Creating new iterators
----------------------

:func:`itertools.count(n) <itertools.count>` returns an infinite stream of
integers, increasing by 1 each time.  You can optionally supply the starting
number, which defaults to 0::

    itertools.count() =>
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...
    itertools.count(10) =>
      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...

:func:`itertools.cycle(iter) <itertools.cycle>` saves a copy of the contents of
a provided iterable and returns a new iterator that returns its elements from
first to last.  The new iterator will repeat these elements infinitely. ::

    itertools.cycle([1, 2, 3, 4, 5]) =>
      1, 2, 3, 4, 5, 1, 2, 3, 4, 5, ...

:func:`itertools.repeat(elem, [n]) <itertools.repeat>` returns the provided
element *n* times, or returns the element endlessly if *n* is not provided. ::

    itertools.repeat('abc') =>
      abc, abc, abc, abc, abc, abc, abc, abc, abc, abc, ...
    itertools.repeat('abc', 5) =>
      abc, abc, abc, abc, abc

:func:`itertools.chain(iterA, iterB, ...) <itertools.chain>` takes an arbitrary
number of iterables as input, and returns all the elements of the first
iterator, then all the elements of the second, and so on, until all of the
iterables have been exhausted. ::

    itertools.chain(['a', 'b', 'c'], (1, 2, 3)) =>
      a, b, c, 1, 2, 3

:func:`itertools.islice(iter, [start], stop, [step]) <itertools.islice>`
returns a stream that's a slice of the iterator.  With a single *stop*
argument, it will return the first *stop* elements.  If you supply a starting
index, you'll get *stop-start* elements, and if you supply a value for *step*,
elements will be skipped accordingly.  Unlike Python's string and list slicing,
you can't use negative values for *start*, *stop*, or *step*. ::

    itertools.islice(range(10), 8) =>
      0, 1, 2, 3, 4, 5, 6, 7
    itertools.islice(range(10), 2, 8) =>
      2, 3, 4, 5, 6, 7
    itertools.islice(range(10), 2, 8, 2) =>
      2, 4, 6

:func:`itertools.tee(iter, [n]) <itertools.tee>` replicates an iterator; it
returns *n* independent iterators that will all return the contents of the
source iterator.
If you don't supply a value for *n*, the default is 2.  Replicating iterators
requires saving some of the contents of the source iterator, so this can
consume significant memory if the iterator is large and one of the new
iterators is consumed more than the others. ::

    itertools.tee( itertools.count() ) =>
       iterA, iterB

    where iterA ->
       0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...

    and iterB ->
       0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...


Calling functions on elements
-----------------------------

The :mod:`operator` module contains a set of functions corresponding to
Python's operators.  Some examples are :func:`operator.add(a, b) <operator.add>`
(adds two values), :func:`operator.ne(a, b) <operator.ne>` (same as ``a != b``),
and :func:`operator.attrgetter('id') <operator.attrgetter>`
(returns a callable that fetches the ``.id`` attribute).
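
For instance, here's a minimal sketch combining these functions with
:func:`map` (the data is purely illustrative)::

    >>> import operator
    >>> list(map(operator.add, [1, 2, 3], [10, 20, 30]))
    [11, 22, 33]
    >>> list(map(operator.ne, [1, 2, 3], [1, 5, 3]))
    [False, True, False]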

:func:`itertools.starmap(func, iter) <itertools.starmap>` assumes that the
iterable will return a stream of tuples, and calls *func* using these tuples as
the arguments::

    itertools.starmap(os.path.join,
                      [('/bin', 'python'), ('/usr', 'bin', 'java'),
                       ('/usr', 'bin', 'perl'), ('/usr', 'bin', 'ruby')])
    =>
      /bin/python, /usr/bin/java, /usr/bin/perl, /usr/bin/ruby


Selecting elements
------------------

Another group of functions chooses a subset of an iterator's elements based on
a predicate.

:func:`itertools.filterfalse(predicate, iter) <itertools.filterfalse>` is the
opposite of :func:`filter`, returning all elements for which the predicate
returns false::

    itertools.filterfalse(is_even, itertools.count()) =>
      1, 3, 5, 7, 9, 11, 13, 15, ...

:func:`itertools.takewhile(predicate, iter) <itertools.takewhile>` returns
elements for as long as the predicate returns true.  Once the predicate returns
false, the iterator will signal the end of its results. ::

    def less_than_10(x):
        return x < 10

    itertools.takewhile(less_than_10, itertools.count()) =>
      0, 1, 2, 3, 4, 5, 6, 7, 8, 9

    itertools.takewhile(is_even, itertools.count()) =>
      0

:func:`itertools.dropwhile(predicate, iter) <itertools.dropwhile>` discards
elements while the predicate returns true, and then returns the rest of the
iterable's results. ::

    itertools.dropwhile(less_than_10, itertools.count()) =>
      10, 11, 12, 13, 14, 15, 16, 17, 18, 19, ...

    itertools.dropwhile(is_even, itertools.count()) =>
      1, 2, 3, 4, 5, 6, 7, 8, 9, 10, ...

:func:`itertools.compress(data, selectors) <itertools.compress>` takes two
iterators and returns only those elements of *data* for which the corresponding
element of *selectors* is true, stopping whenever either one is exhausted::

    itertools.compress([1, 2, 3, 4, 5], [True, True, False, False, True]) =>
       1, 2, 5


Combinatoric functions
----------------------

:func:`itertools.combinations(iterable, r) <itertools.combinations>`
returns an iterator giving all possible *r*-tuple combinations of the
elements contained in *iterable*. ::

    itertools.combinations([1, 2, 3, 4, 5], 2) =>
      (1, 2), (1, 3), (1, 4), (1, 5),
      (2, 3), (2, 4), (2, 5),
      (3, 4), (3, 5),
      (4, 5)

    itertools.combinations([1, 2, 3, 4, 5], 3) =>
      (1, 2, 3), (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5), (1, 4, 5),
      (2, 3, 4), (2, 3, 5), (2, 4, 5),
      (3, 4, 5)

The elements within each tuple remain in the same order as
*iterable* returned them.  For example, the number 1 is always before
2, 3, 4, or 5 in the examples above.  A similar function,
:func:`itertools.permutations(iterable, r=None) <itertools.permutations>`,
removes this constraint on the order, returning all possible
arrangements of length *r*::

    itertools.permutations([1, 2, 3, 4, 5], 2) =>
      (1, 2), (1, 3), (1, 4), (1, 5),
      (2, 1), (2, 3), (2, 4), (2, 5),
      (3, 1), (3, 2), (3, 4), (3, 5),
      (4, 1), (4, 2), (4, 3), (4, 5),
      (5, 1), (5, 2), (5, 3), (5, 4)

    itertools.permutations([1, 2, 3, 4, 5]) =>
      (1, 2, 3, 4, 5), (1, 2, 3, 5, 4), (1, 2, 4, 3, 5),
      ...
      (5, 4, 3, 2, 1)

If you don't supply a value for *r*, the length of the iterable is used,
meaning that all the elements are permuted.
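
For instance, a quick interactive check (a minimal sketch)::

    >>> import itertools
    >>> list(itertools.combinations('ABC', 2))
    [('A', 'B'), ('A', 'C'), ('B', 'C')]
    >>> list(itertools.permutations('ABC', 2))
    [('A', 'B'), ('A', 'C'), ('B', 'A'), ('B', 'C'), ('C', 'A'), ('C', 'B')]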

Note that these functions produce all of the possible combinations by
position and don't require that the contents of *iterable* are unique::

    itertools.permutations('aba', 3) =>
      ('a', 'b', 'a'), ('a', 'a', 'b'), ('b', 'a', 'a'),
      ('b', 'a', 'a'), ('a', 'a', 'b'), ('a', 'b', 'a')

The identical tuple ``('a', 'a', 'b')`` occurs twice, but the two 'a'
strings came from different positions.

The :func:`itertools.combinations_with_replacement(iterable, r) <itertools.combinations_with_replacement>`
function relaxes a different constraint: elements can be repeated
within a single tuple.  Conceptually an element is selected for the
first position of each tuple and then is replaced before the second
element is selected. ::

    itertools.combinations_with_replacement([1, 2, 3, 4, 5], 2) =>
      (1, 1), (1, 2), (1, 3), (1, 4), (1, 5),
      (2, 2), (2, 3), (2, 4), (2, 5),
      (3, 3), (3, 4), (3, 5),
      (4, 4), (4, 5),
      (5, 5)


Grouping elements
-----------------

The last function I'll discuss, :func:`itertools.groupby(iter, key_func=None)
<itertools.groupby>`, is the most complicated.  ``key_func(elem)`` is a
function that can compute a key value for each element returned by the
iterable.  If you don't supply a key function, the key is simply each element
itself.

:func:`~itertools.groupby` collects all the consecutive elements from the
underlying iterable that have the same key value, and returns a stream of
2-tuples containing a key value and an iterator for the elements with that key.

::

    city_list = [('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL'),
                 ('Anchorage', 'AK'), ('Nome', 'AK'),
                 ('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ'),
                 ...
                ]

    def get_state(city_state):
        return city_state[1]

    itertools.groupby(city_list, get_state) =>
      ('AL', iterator-1),
      ('AK', iterator-2),
      ('AZ', iterator-3), ...

    where
    iterator-1 =>
      ('Decatur', 'AL'), ('Huntsville', 'AL'), ('Selma', 'AL')
    iterator-2 =>
      ('Anchorage', 'AK'), ('Nome', 'AK')
    iterator-3 =>
      ('Flagstaff', 'AZ'), ('Phoenix', 'AZ'), ('Tucson', 'AZ')

:func:`~itertools.groupby` assumes that the underlying iterable's contents will
already be sorted based on the key.  Note that the returned iterators also use
the underlying iterable, so you have to consume the results of iterator-1
before requesting iterator-2 and its corresponding key.


The functools module
====================

The :mod:`functools` module in Python 2.5 contains some higher-order functions.
A **higher-order function** takes one or more functions as input and returns a
new function.  The most useful tool in this module is the
:func:`functools.partial` function.

For programs written in a functional style, you'll sometimes want to construct
variants of existing functions that have some of the parameters filled in.
Consider a Python function ``f(a, b, c)``; you may wish to create a new
function ``g(b, c)`` that's equivalent to ``f(1, b, c)``; you're filling in a
value for one of ``f()``'s parameters.  This is called "partial function
application".

The constructor for :func:`~functools.partial` takes the arguments
``(function, arg1, arg2, ..., kwarg1=value1, kwarg2=value2)``.  The resulting
object is callable, so you can just call it to invoke ``function`` with the
filled-in arguments.

Here's a small but realistic example::

    import functools

    def log(message, subsystem):
        """Write the contents of 'message' to the specified subsystem."""
        print('%s: %s' % (subsystem, message))
        ...

    server_log = functools.partial(log, subsystem='server')
    server_log('Unable to open socket')

:func:`functools.reduce(func, iter, [initial_value]) <functools.reduce>`
cumulatively performs an operation on all the iterable's elements and,
therefore, can't be applied to infinite iterables.  *func* must be a function
that takes two elements and returns a single value.  :func:`functools.reduce`
takes the first two elements A and B returned by the iterator and calculates
``func(A, B)``.  It then requests the third element, C, calculates
``func(func(A, B), C)``, combines this result with the fourth element returned,
and continues until the iterable is exhausted.  If the iterable returns no
values at all, a :exc:`TypeError` exception is raised.  If the initial value is
supplied, it's used as a starting point and ``func(initial_value, A)`` is the
first calculation. ::

    >>> import operator, functools
    >>> functools.reduce(operator.concat, ['A', 'BB', 'C'])
    'ABBC'
    >>> functools.reduce(operator.concat, [])
    Traceback (most recent call last):
      ...
    TypeError: reduce() of empty sequence with no initial value
    >>> functools.reduce(operator.mul, [1, 2, 3], 1)
    6
    >>> functools.reduce(operator.mul, [], 1)
    1

If you use :func:`operator.add` with :func:`functools.reduce`, you'll add up
all the elements of the iterable.  This case is so common that there's a
special built-in called :func:`sum` to compute it:

>>> import functools, operator
>>> functools.reduce(operator.add, [1, 2, 3, 4], 0)
10
>>> sum([1, 2, 3, 4])
10
>>> sum([])
0

For many uses of :func:`functools.reduce`, though, it can be clearer to just
write the obvious :keyword:`for` loop::

    import functools, operator

    # Instead of:
    product = functools.reduce(operator.mul, [1, 2, 3], 1)

    # You can write:
    product = 1
    for i in [1, 2, 3]:
        product *= i

A related function is :func:`itertools.accumulate(iterable, func=operator.add)
<itertools.accumulate>`.  It performs the same calculation, but instead of
returning only the final result, :func:`~itertools.accumulate` returns an
iterator that also yields each partial result::

    itertools.accumulate([1, 2, 3, 4, 5]) =>
      1, 3, 6, 10, 15

    itertools.accumulate([1, 2, 3, 4, 5], operator.mul) =>
      1, 2, 6, 24, 120


The operator module
-------------------

The :mod:`operator` module was mentioned earlier.  It contains a set of
functions corresponding to Python's operators.  These functions are often
useful in functional-style code because they save you from writing trivial
functions that perform a single operation.

Some of the functions in this module are:

* Math operations: ``add()``, ``sub()``, ``mul()``, ``floordiv()``, ``abs()``, ...
* Logical operations: ``not_()``, ``truth()``.
* Bitwise operations: ``and_()``, ``or_()``, ``invert()``.
* Comparisons: ``eq()``, ``ne()``, ``lt()``, ``le()``, ``gt()``, and ``ge()``.
* Object identity: ``is_()``, ``is_not()``.

Consult the operator module's documentation for a complete list.
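
For instance, a minimal sketch of how these functions stand in for trivial
lambdas (the data is purely illustrative)::

    >>> import functools, operator
    >>> # operator.add replaces lambda a, b: a + b
    >>> functools.reduce(operator.add, [1, 2, 3, 4])
    10
    >>> # operator.not_ replaces lambda x: not x
    >>> list(map(operator.not_, [0, 1, '', 'x']))
    [True, False, True, False]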


Small functions and the lambda expression
=========================================

When writing functional-style programs, you'll often need little functions that
act as predicates or that combine elements in some way.

If there's a Python built-in or a module function that's suitable, you don't
need to define a new function at all::

    stripped_lines = [line.strip() for line in lines]
    existing_files = filter(os.path.exists, file_list)

If the function you need doesn't exist, you need to write it.  One way to write
small functions is to use the :keyword:`lambda` expression.  ``lambda`` takes a
number of parameters and an expression combining these parameters, and creates
an anonymous function that returns the value of the expression::

    adder = lambda x, y: x + y

    print_assign = lambda name, value: name + '=' + str(value)

An alternative is to just use the ``def`` statement and define a function in
the usual way::

    def adder(x, y):
        return x + y

    def print_assign(name, value):
        return name + '=' + str(value)

Which alternative is preferable?  That's a style question; my usual course is
to avoid using ``lambda``.

One reason for my preference is that ``lambda`` is quite limited in the
functions it can define.  The result has to be computable as a single
expression, which means you can't have multiway ``if... elif... else``
comparisons or ``try... except`` statements.  If you try to do too much in a
``lambda`` expression, you'll end up with an overly complicated expression
that's hard to read.  Quick, what's the following code doing? ::

    import functools
    total = functools.reduce(lambda a, b: (0, a[1] + b[1]), items)[1]

You can figure it out, but it takes time to disentangle the expression to
figure out what's going on.  Using a short nested ``def`` statement makes
things a little bit better::

    import functools
    def combine(a, b):
        return 0, a[1] + b[1]

    total = functools.reduce(combine, items)[1]

But it would be best of all if I had simply used a ``for`` loop::

    total = 0
    for a, b in items:
        total += b

Or the :func:`sum` built-in and a generator expression::

    total = sum(b for a, b in items)

Many uses of :func:`functools.reduce` are clearer when written as ``for``
loops.

Fredrik Lundh once suggested the following set of rules for refactoring uses of
``lambda``:

1. Write a lambda function.
2. Write a comment explaining what the heck that lambda does.
3. Study the comment for a while, and think of a name that captures the essence
   of the comment.
4. Convert the lambda to a def statement, using that name.
5. Remove the comment.

I really like these rules, but you're free to disagree
about whether this lambda-free style is better.


Revision History and Acknowledgements
=====================================

The author would like to thank the following people for offering suggestions,
corrections and assistance with various drafts of this article: Ian Bicking,
Nick Coghlan, Nick Efford, Raymond Hettinger, Jim Jewett, Mike Krell, Leandro
Lameiro, Jussi Salmela, Collin Winter, Blake Winton.

Version 0.1: posted June 30 2006.

Version 0.11: posted July 1 2006.  Typo fixes.

Version 0.2: posted July 10 2006.  Merged genexp and listcomp sections into
one.  Typo fixes.

Version 0.21: Added more references suggested on the tutor mailing list.

Version 0.30: Adds a section on the ``functional`` module written by Collin
Winter; adds short section on the operator module; a few other edits.


References
==========

General
-------

**Structure and Interpretation of Computer Programs**, by Harold Abelson and
Gerald Jay Sussman with Julie Sussman.  Full text at
https://mitpress.mit.edu/sicp/.  In this classic textbook of computer science,
chapters 2 and 3 discuss the use of sequences and streams to organize the data
flow inside a program.  The book uses Scheme for its examples, but many of the
design approaches described in these chapters are applicable to
functional-style Python code.

http://www.defmacro.org/ramblings/fp.html: A general introduction to functional
programming that uses Java examples and has a lengthy historical introduction.

https://en.wikipedia.org/wiki/Functional_programming: General Wikipedia entry
describing functional programming.

https://en.wikipedia.org/wiki/Coroutine: Entry for coroutines.

https://en.wikipedia.org/wiki/Currying: Entry for the concept of currying.

Python-specific
---------------

http://gnosis.cx/TPiP/: The first chapter of David Mertz's book
:title-reference:`Text Processing in Python` discusses functional programming
for text processing, in the section titled "Utilizing Higher-Order Functions in
Text Processing".

Mertz also wrote a 3-part series of articles on functional programming
for IBM's DeveloperWorks site; see
`part 1 <https://www.ibm.com/developerworks/linux/library/l-prog/index.html>`__,
`part 2 <https://www.ibm.com/developerworks/linux/library/l-prog2/index.html>`__, and
`part 3 <https://www.ibm.com/developerworks/linux/library/l-prog3/index.html>`__.


Python documentation
--------------------

Documentation for the :mod:`itertools` module.

Documentation for the :mod:`operator` module.

:pep:`289`: "Generator Expressions"

:pep:`342`: "Coroutines via Enhanced Generators" describes the new generator
features in Python 2.5.

.. comment

    Handy little function for printing part of an iterator -- used
    while writing this document.

    import itertools, sys
    def print_iter(it):
        elems = list(itertools.islice(it, 10))
        for elem in elems[:-1]:
            sys.stdout.write(str(elem))
            sys.stdout.write(', ')
        print(elems[-1])