# Graphs and Sessions

TensorFlow uses a **dataflow graph** to represent your computation in terms of
the dependencies between individual operations. This leads to a low-level
programming model in which you first define the dataflow graph, then create a
TensorFlow **session** to run parts of the graph across a set of local and
remote devices.

This guide will be most useful if you intend to use the low-level programming
model directly. Higher-level APIs such as @{tf.estimator.Estimator} and Keras
hide the details of graphs and sessions from the end user, but this guide may
also be useful if you want to understand how these APIs are implemented.

## Why dataflow graphs?

![](../images/tensors_flowing.gif)

[Dataflow](https://en.wikipedia.org/wiki/Dataflow_programming) is a common
programming model for parallel computing. In a dataflow graph, the nodes
represent units of computation, and the edges represent the data consumed or
produced by a computation. For example, in a TensorFlow graph, the @{tf.matmul}
operation would correspond to a single node with two incoming edges (the
matrices to be multiplied) and one outgoing edge (the result of the
multiplication).

<!-- TODO(barryr): Add a diagram to illustrate the @{tf.matmul} graph. -->

Dataflow has several advantages that TensorFlow leverages when executing your
programs:

* **Parallelism.** By using explicit edges to represent dependencies between
  operations, it is easy for the system to identify operations that can execute
  in parallel.

* **Distributed execution.** By using explicit edges to represent the values
  that flow between operations, it is possible for TensorFlow to partition your
  program across multiple devices (CPUs, GPUs, and TPUs) attached to different
  machines. TensorFlow inserts the necessary communication and coordination
  between devices.

* **Compilation.** TensorFlow's @{$performance/xla$XLA compiler} can
  use the information in your dataflow graph to generate faster code, for
  example, by fusing together adjacent operations.

* **Portability.** The dataflow graph is a language-independent representation
  of the code in your model. You can build a dataflow graph in Python, store it
  in a @{$saved_model$SavedModel}, and restore it in a C++ program for
  low-latency inference.


## What is a @{tf.Graph}?

A @{tf.Graph} contains two relevant kinds of information:

* **Graph structure.** The nodes and edges of the graph, indicating how
  individual operations are composed together, but not prescribing how they
  should be used. The graph structure is like assembly code: inspecting it can
  convey some useful information, but it does not contain all of the useful
  context that source code conveys.

* **Graph collections.** TensorFlow provides a general mechanism for storing
  collections of metadata in a @{tf.Graph}. The @{tf.add_to_collection} function
  enables you to associate a list of objects with a key (where @{tf.GraphKeys}
  defines some of the standard keys), and @{tf.get_collection} enables you to
  look up all objects associated with a key. Many parts of the TensorFlow
  library use this facility: for example, when you create a @{tf.Variable}, it
  is added by default to collections representing "global variables" and
  "trainable variables". When you later create a @{tf.train.Saver} or
  @{tf.train.Optimizer}, the variables in these collections are used as the
  default arguments. The sketch following this list illustrates the mechanism.

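As an illustration, the following minimal sketch shows the collection machinery
end to end: creating a variable populates the standard collections, and you can
also maintain your own collections under arbitrary string keys. (The
`"my_losses"` key and the variable names here are illustrative, not standard
keys.)

```python
import tensorflow as tf

v = tf.Variable(1.0, name="v")

# Creating the variable added it to these standard collections by default.
print(tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES))
# => [<tf.Variable 'v:0' shape=() dtype=float32_ref>]
print(tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES))
# => [<tf.Variable 'v:0' shape=() dtype=float32_ref>]

# You can also store objects under your own (non-standard) string keys.
loss = tf.constant(0.5, name="loss")
tf.add_to_collection("my_losses", loss)
print(tf.get_collection("my_losses"))  # => [<tf.Tensor 'loss:0' ...>]
```
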
## Building a @{tf.Graph}

Most TensorFlow programs start with a dataflow graph construction phase. In this
phase, you invoke TensorFlow API functions that construct new @{tf.Operation}
(node) and @{tf.Tensor} (edge) objects and add them to a @{tf.Graph}
instance. TensorFlow provides a **default graph** that is an implicit argument
to all API functions in the same context. For example:

* Calling `tf.constant(42.0)` creates a single @{tf.Operation} that produces the
  value `42.0`, adds it to the default graph, and returns a @{tf.Tensor} that
  represents the value of the constant.

* Calling `tf.matmul(x, y)` creates a single @{tf.Operation} that multiplies
  the values of @{tf.Tensor} objects `x` and `y`, adds it to the default graph,
  and returns a @{tf.Tensor} that represents the result of the multiplication.

* Executing `v = tf.Variable(0)` adds to the graph a @{tf.Operation} that will
  store a writeable tensor value that persists between @{tf.Session.run} calls.
  The @{tf.Variable} object wraps this operation, and can be used [like a
  tensor](#tensor-like-objects), in which case it reads the current value of the
  stored tensor. The @{tf.Variable} object also has methods such as
  @{tf.Variable.assign$`assign`} and @{tf.Variable.assign_add$`assign_add`} that
  create @{tf.Operation} objects that, when executed, update the stored value.
  (See @{$programmers_guide/variables} for more information about variables.)

* Calling @{tf.train.Optimizer.minimize} will add operations and tensors to the
  default graph that calculate gradients, and return a @{tf.Operation} that,
  when run, will apply those gradients to a set of variables.

Most programs rely solely on the default graph. However,
see [Programming with multiple graphs](#programming-with-multiple-graphs) for
more advanced use cases. High-level APIs such as the @{tf.estimator.Estimator}
API manage the default graph on your behalf, and--for example--may create
different graphs for training and evaluation.

Note: Calling most functions in the TensorFlow API merely adds operations
and tensors to the default graph, but **does not** perform the actual
computation. Instead, you compose these functions until you have a @{tf.Tensor}
or @{tf.Operation} that represents the overall computation--such as performing
one step of gradient descent--and then pass that object to a @{tf.Session} to
perform the computation. See the section "Executing a graph in a @{tf.Session}"
for more details.

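To make the deferred-execution model concrete, here is a minimal sketch: each
call below merely adds nodes to the default graph, and printing the resulting
@{tf.Tensor} shows a symbolic handle rather than a computed value.

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0]])
w = tf.constant([[3.0], [4.0]])
y = tf.matmul(x, w)

# No multiplication has happened yet; `y` is a symbolic handle into the graph.
print(y)  # => Tensor("MatMul:0", shape=(1, 1), dtype=float32)

# In a fresh default graph, the two constants and the matmul are now recorded.
print(len(tf.get_default_graph().get_operations()))  # => 3

# Only a session call actually computes the value.
with tf.Session() as sess:
  print(sess.run(y))  # => [[11.]]
```
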
## Naming operations

A @{tf.Graph} object defines a **namespace** for the @{tf.Operation} objects it
contains. TensorFlow automatically chooses a unique name for each operation in
your graph, but giving operations descriptive names can make your program easier
to read and debug. The TensorFlow API provides two ways to override the name of
an operation:

* Each API function that creates a new @{tf.Operation} or returns a new
  @{tf.Tensor} accepts an optional `name` argument. For example,
  `tf.constant(42.0, name="answer")` creates a new @{tf.Operation} named
  `"answer"` and returns a @{tf.Tensor} named `"answer:0"`. If the default graph
  already contains an operation named `"answer"`, then TensorFlow would append
  `"_1"`, `"_2"`, and so on to the name, in order to make it unique.

* The @{tf.name_scope} function makes it possible to add a **name scope** prefix
  to all operations created in a particular context. The current name scope
  prefix is a `"/"`-delimited list of the names of all active @{tf.name_scope}
  context managers. If a name scope has already been used in the current
  context, TensorFlow appends `"_1"`, `"_2"`, and so on. For example:

  ```python
  c_0 = tf.constant(0, name="c")  # => operation named "c"

  # Already-used names will be "uniquified".
  c_1 = tf.constant(2, name="c")  # => operation named "c_1"

  # Name scopes add a prefix to all operations created in the same context.
  with tf.name_scope("outer"):
    c_2 = tf.constant(2, name="c")  # => operation named "outer/c"

    # Name scopes nest like paths in a hierarchical file system.
    with tf.name_scope("inner"):
      c_3 = tf.constant(3, name="c")  # => operation named "outer/inner/c"

    # Exiting a name scope context will return to the previous prefix.
    c_4 = tf.constant(4, name="c")  # => operation named "outer/c_1"

    # Already-used name scopes will be "uniquified".
    with tf.name_scope("inner"):
      c_5 = tf.constant(5, name="c")  # => operation named "outer/inner_1/c"
  ```

The graph visualizer uses name scopes to group operations and reduce the visual
complexity of a graph. See [Visualizing your graph](#visualizing-your-graph) for
more information.

Note that @{tf.Tensor} objects are implicitly named after the @{tf.Operation}
that produces the tensor as output. A tensor name has the form `"<OP_NAME>:<i>"`
where:

* `"<OP_NAME>"` is the name of the operation that produces it.
* `"<i>"` is an integer representing the index of that tensor among the
  operation's outputs.

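These names are not only cosmetic: you can use them to recover operations and
tensors from a graph. A minimal sketch, assuming the name scope example above
has already run against the default graph:

```python
g = tf.get_default_graph()

# Look up an operation by its name, and a tensor by "<OP_NAME>:<i>".
op = g.get_operation_by_name("outer/c")
t = g.get_tensor_by_name("outer/c:0")

print(op.name)  # => "outer/c"
print(t.name)   # => "outer/c:0"
```
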
## Placing operations on different devices

If you want your TensorFlow program to use multiple different devices, the
@{tf.device} function provides a convenient way to request that all operations
created in a particular context are placed on the same device (or type of
device).

A **device specification** has the following form:

```
/job:<JOB_NAME>/task:<TASK_INDEX>/device:<DEVICE_TYPE>:<DEVICE_INDEX>
```

where:

* `<JOB_NAME>` is an alphanumeric string that does not start with a number.
* `<TASK_INDEX>` is a non-negative integer representing the index of the task
  in the job named `<JOB_NAME>`. See @{tf.train.ClusterSpec} for an explanation
  of jobs and tasks.
* `<DEVICE_TYPE>` is a registered device type (such as `GPU` or `CPU`).
* `<DEVICE_INDEX>` is a non-negative integer representing the index of the
  device, for example, to distinguish between different GPU devices used in the
  same process.

You do not need to specify every part of a device specification. For example,
if you are running in a single-machine configuration with a single GPU, you
might use @{tf.device} to pin some operations to the CPU and GPU:

```python
# Operations created outside either context will run on the "best possible"
# device. For example, if you have a GPU and a CPU available, and the operation
# has a GPU implementation, TensorFlow will choose the GPU.
weights = tf.random_normal(...)

with tf.device("/device:CPU:0"):
  # Operations created in this context will be pinned to the CPU.
  img = tf.image.decode_jpeg(tf.read_file("img.jpg"))

with tf.device("/device:GPU:0"):
  # Operations created in this context will be pinned to the GPU.
  result = tf.matmul(weights, img)
```

If you are deploying TensorFlow in a @{$deploy/distributed$typical distributed
configuration}, you might specify the job name and task index to place variables
on a task in the parameter server job (`"/job:ps"`), and the other operations on
tasks in the worker job (`"/job:worker"`):

```python
with tf.device("/job:ps/task:0"):
  weights_1 = tf.Variable(tf.truncated_normal([784, 100]))
  biases_1 = tf.Variable(tf.zeros([100]))

with tf.device("/job:ps/task:1"):
  weights_2 = tf.Variable(tf.truncated_normal([100, 10]))
  biases_2 = tf.Variable(tf.zeros([10]))

with tf.device("/job:worker"):
  layer_1 = tf.matmul(train_batch, weights_1) + biases_1
  layer_2 = tf.matmul(layer_1, weights_2) + biases_2
```

@{tf.device} gives you a lot of flexibility to choose placements for individual
operations or broad regions of a TensorFlow graph. In many cases, there are
simple heuristics that work well. For example, the
@{tf.train.replica_device_setter} API can be used with @{tf.device} to place
operations for **data-parallel distributed training**. The following code
fragment shows how @{tf.train.replica_device_setter} applies different placement
policies to @{tf.Variable} objects and other operations:

```python
with tf.device(tf.train.replica_device_setter(ps_tasks=3)):
  # tf.Variable objects are, by default, placed on tasks in "/job:ps" in a
  # round-robin fashion.
  w_0 = tf.Variable(...)  # placed on "/job:ps/task:0"
  b_0 = tf.Variable(...)  # placed on "/job:ps/task:1"
  w_1 = tf.Variable(...)  # placed on "/job:ps/task:2"
  b_1 = tf.Variable(...)  # placed on "/job:ps/task:0"

  input_data = tf.placeholder(tf.float32)     # placed on "/job:worker"
  layer_0 = tf.matmul(input_data, w_0) + b_0  # placed on "/job:worker"
  layer_1 = tf.matmul(layer_0, w_1) + b_1     # placed on "/job:worker"
```

## Tensor-like objects

Many TensorFlow operations take one or more @{tf.Tensor} objects as arguments.
For example, @{tf.matmul} takes two @{tf.Tensor} objects, and @{tf.add_n} takes
a list of `n` @{tf.Tensor} objects. For convenience, these functions will accept
a **tensor-like object** in place of a @{tf.Tensor}, and implicitly convert it
to a @{tf.Tensor} using the @{tf.convert_to_tensor} function. Tensor-like
objects include elements of the following types:

* @{tf.Tensor}
* @{tf.Variable}
* [`numpy.ndarray`](https://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html)
* `list` (and lists of tensor-like objects)
* Scalar Python types: `bool`, `float`, `int`, `str`

You can register additional tensor-like types using
@{tf.register_tensor_conversion_function}.

Note: By default, TensorFlow will create a new @{tf.Tensor} each time you use
the same tensor-like object. If the tensor-like object is large (e.g. a
`numpy.ndarray` containing a set of training examples) and you use it multiple
times, you may run out of memory. To avoid this, manually call
@{tf.convert_to_tensor} on the tensor-like object once and use the returned
@{tf.Tensor} instead.

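For example, the following sketch (the `data` array and its shape are
hypothetical) shows the pattern that the note above recommends: convert the
large object once and reuse the resulting @{tf.Tensor}:

```python
import numpy as np
import tensorflow as tf

data = np.random.rand(10000, 784).astype(np.float32)  # hypothetical examples

# Each use of `data` below would otherwise embed a fresh copy in the graph:
#   y_1 = tf.matmul(data, w)
#   y_2 = data + 1.0
# Convert once instead, and reuse the resulting tensor.
data_t = tf.convert_to_tensor(data)

w = tf.Variable(tf.random_uniform([784, 10]))
y_1 = tf.matmul(data_t, w)
y_2 = data_t + 1.0
```
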
## Executing a graph in a @{tf.Session}

TensorFlow uses the @{tf.Session} class to represent a connection between the
client program---typically a Python program, although a similar interface is
available in other languages---and the C++ runtime. A @{tf.Session} object
provides access to devices in the local machine, and to remote devices using the
distributed TensorFlow runtime. It also caches information about your
@{tf.Graph} so that you can efficiently run the same computation multiple times.

### Creating a @{tf.Session}

If you are using the low-level TensorFlow API, you can create a @{tf.Session}
for the current default graph as follows:

```python
# Create a default in-process session.
with tf.Session() as sess:
  # ...

# Create a remote session.
with tf.Session("grpc://example.org:2222"):
  # ...
```

Since a @{tf.Session} owns physical resources (such as GPUs and
network connections), it is typically used as a context manager (in a `with`
block) that automatically closes the session when you exit the block. It is
also possible to create a session without using a `with` block, but you should
explicitly call @{tf.Session.close} when you are finished with it to free the
resources.

Note: Higher-level APIs such as @{tf.train.MonitoredTrainingSession} or
@{tf.estimator.Estimator} will create and manage a @{tf.Session} for you. These
APIs accept optional `target` and `config` arguments (either directly, or as
part of a @{tf.estimator.RunConfig} object), with the same meaning as
described below.

@{tf.Session.__init__} accepts three optional arguments:

* **`target`.** If this argument is left empty (the default), the session will
  only use devices in the local machine. However, you may also specify a
  `grpc://` URL to specify the address of a TensorFlow server, which gives the
  session access to all devices on machines that this server controls. See
  @{tf.train.Server} for details of how to create a TensorFlow
  server. For example, in the common **between-graph replication**
  configuration, the @{tf.Session} connects to a @{tf.train.Server} in the same
  process as the client. The [distributed TensorFlow](../deploy/distributed.md)
  deployment guide describes other common scenarios.

* **`graph`.** By default, a new @{tf.Session} will be bound to---and only able
  to run operations in---the current default graph. If you are using multiple
  graphs in your program (see [Programming with multiple
  graphs](#programming-with-multiple-graphs) for more details), you can specify
  an explicit @{tf.Graph} when you construct the session.

* **`config`.** This argument allows you to specify a @{tf.ConfigProto} that
  controls the behavior of the session; a short sketch follows this list. For
  example, some of the configuration options include:

  * `allow_soft_placement`. Set this to `True` to enable a "soft" device
    placement algorithm, which ignores @{tf.device} annotations that attempt
    to place CPU-only operations on a GPU device, and places them on the CPU
    instead.

  * `cluster_def`. When using distributed TensorFlow, this option allows you
    to specify what machines to use in the computation, and provides a mapping
    between job names, task indices, and network addresses. See
    @{tf.train.ClusterSpec.as_cluster_def} for details.

  * `graph_options.optimizer_options`. Provides control over the optimizations
    that TensorFlow performs on your graph before executing it.

  * `gpu_options.allow_growth`. Set this to `True` to change the GPU memory
    allocator so that it gradually increases the amount of memory allocated,
    rather than allocating most of the memory at startup.

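As a minimal sketch of how these options fit together (this particular
combination is illustrative, not a recommendation):

```python
# Build a ConfigProto that enables soft device placement and incremental GPU
# memory allocation, then pass it when constructing the session.
config = tf.ConfigProto()
config.allow_soft_placement = True
config.gpu_options.allow_growth = True

with tf.Session(config=config) as sess:
  # ... run your graph as usual ...
  pass
```
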
### Using @{tf.Session.run} to execute operations

The @{tf.Session.run} method is the main mechanism for running a @{tf.Operation}
or evaluating a @{tf.Tensor}. You can pass one or more @{tf.Operation} or
@{tf.Tensor} objects to @{tf.Session.run}, and TensorFlow will execute the
operations that are needed to compute the result.

@{tf.Session.run} requires you to specify a list of **fetches**, which determine
the return values, and may be a @{tf.Operation}, a @{tf.Tensor}, or
a [tensor-like type](#tensor-like-objects) such as @{tf.Variable}. These fetches
determine what **subgraph** of the overall @{tf.Graph} must be executed to
produce the result: this is the subgraph that contains all operations named in
the fetch list, plus all operations whose outputs are used to compute the value
of the fetches. For example, the following code fragment shows how different
arguments to @{tf.Session.run} cause different subgraphs to be executed:

```python
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
w = tf.Variable(tf.random_uniform([2, 2]))
y = tf.matmul(x, w)
output = tf.nn.softmax(y)
init_op = w.initializer

with tf.Session() as sess:
  # Run the initializer on `w`.
  sess.run(init_op)

  # Evaluate `output`. `sess.run(output)` will return a NumPy array containing
  # the result of the computation.
  print(sess.run(output))

  # Evaluate `y` and `output`. Note that `y` will only be computed once, and its
  # result used both to return `y_val` and as an input to the `tf.nn.softmax()`
  # op. Both `y_val` and `output_val` will be NumPy arrays.
  y_val, output_val = sess.run([y, output])
```

@{tf.Session.run} also optionally takes a dictionary of **feeds**, which is a
mapping from @{tf.Tensor} objects (typically @{tf.placeholder} tensors) to
values (typically Python scalars, lists, or NumPy arrays) that will be
substituted for those tensors in the execution. For example:

```python
# Define a placeholder that expects a vector of three floating-point values,
# and a computation that depends on it.
x = tf.placeholder(tf.float32, shape=[3])
y = tf.square(x)

with tf.Session() as sess:
  # Feeding a value changes the result that is returned when you evaluate `y`.
  print(sess.run(y, {x: [1.0, 2.0, 3.0]}))  # => "[1.0, 4.0, 9.0]"
  print(sess.run(y, {x: [0.0, 0.0, 5.0]}))  # => "[0.0, 0.0, 25.0]"

  # Raises `tf.errors.InvalidArgumentError`, because you must feed a value for
  # a `tf.placeholder()` when evaluating a tensor that depends on it.
  sess.run(y)

  # Raises `ValueError`, because the shape of `37.0` does not match the shape
  # of placeholder `x`.
  sess.run(y, {x: 37.0})
```

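Although placeholders are the typical feed targets, the mechanism is more
general: a value can be substituted for (almost) any tensor in the graph, which
short-circuits the operations that would otherwise compute it. A small sketch,
assuming default graph settings:

```python
a = tf.constant(3.0)
b = a * 2.0
c = b + 1.0

with tf.Session() as sess:
  # Without feeds, the whole subgraph runs.
  print(sess.run(c))             # => 7.0

  # Feeding `b` substitutes the fed value; the multiply never executes.
  print(sess.run(c, {b: 10.0}))  # => 11.0
```
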
@{tf.Session.run} also accepts an optional `options` argument that enables you
to specify options about the call, and an optional `run_metadata` argument that
enables you to collect metadata about the execution. For example, you can use
these options together to collect tracing information about the execution:

```python
y = tf.matmul([[37.0, -23.0], [1.0, 4.0]], tf.random_uniform([2, 2]))

with tf.Session() as sess:
  # Define options for the `sess.run()` call.
  options = tf.RunOptions()
  options.output_partition_graphs = True
  options.trace_level = tf.RunOptions.FULL_TRACE

  # Define a container for the returned metadata.
  metadata = tf.RunMetadata()

  sess.run(y, options=options, run_metadata=metadata)

  # Print the subgraphs that executed on each device.
  print(metadata.partition_graphs)

  # Print the timings of each operation that executed.
  print(metadata.step_stats)
```


## Visualizing your graph

TensorFlow includes tools that can help you to understand the code in a graph.
The **graph visualizer** is a component of TensorBoard that renders the
structure of your graph visually in a browser. The easiest way to create a
visualization is to pass a @{tf.Graph} when creating the
@{tf.summary.FileWriter}:

```python
# Build your graph.
x = tf.constant([[37.0, -23.0], [1.0, 4.0]])
w = tf.Variable(tf.random_uniform([2, 2]))
y = tf.matmul(x, w)
# ...
loss = ...
train_op = tf.train.AdagradOptimizer(0.01).minimize(loss)

with tf.Session() as sess:
  # `sess.graph` provides access to the graph used in a `tf.Session`.
  writer = tf.summary.FileWriter("/tmp/log/...", sess.graph)

  # Perform your computation...
  for i in range(1000):
    sess.run(train_op)
  # ...

  writer.close()
```

Note: If you are using a @{tf.estimator.Estimator}, the graph (and any
summaries) will be logged automatically to the `model_dir` that you specified
when creating the estimator.

You can then open the log in `tensorboard`, navigate to the "Graph" tab, and
see a high-level visualization of your graph's structure. Note that a typical
TensorFlow graph---especially training graphs with automatically computed
gradients---has too many nodes to visualize at once. The graph visualizer makes
use of name scopes to group related operations into "super" nodes. You can
click on the orange "+" button on any of these super nodes to expand the
subgraph inside.

![](../images/mnist_deep.png)

For more information about visualizing your TensorFlow application with
TensorBoard, see the [TensorBoard tutorial](../get_started/summaries_and_tensorboard.md).

## Programming with multiple graphs

Note: When training a model, a common way of organizing your code is to use one
graph for training your model, and a separate graph for evaluating or performing
inference with a trained model. In many cases, the inference graph will be
different from the training graph: for example, techniques like dropout and
batch normalization use different operations in each case. Furthermore, by
default, utilities like @{tf.train.Saver} use the names of @{tf.Variable}
objects (which have names based on an underlying @{tf.Operation}) to identify
each variable in a saved checkpoint. When programming this way, you can either
use completely separate Python processes to build and execute the graphs, or you
can use multiple graphs in the same process. This section describes how to use
multiple graphs in the same process.

As noted above, TensorFlow provides a "default graph" that is implicitly passed
to all API functions in the same context. For many applications, a single graph
is sufficient. However, TensorFlow also provides methods for manipulating
the default graph, which can be useful in more advanced use cases. For example:

* A @{tf.Graph} defines the namespace for @{tf.Operation} objects: each
  operation in a single graph must have a unique name. TensorFlow will
  "uniquify" the names of operations by appending `"_1"`, `"_2"`, and so on to
  their names if the requested name is already taken. Using multiple explicitly
  created graphs gives you more control over what name is given to each
  operation.

* The default graph stores information about every @{tf.Operation} and
  @{tf.Tensor} that was ever added to it. If your program creates a large number
  of unconnected subgraphs, it may be more efficient to use a different
  @{tf.Graph} to build each subgraph, so that unrelated state can be garbage
  collected.

You can install a different @{tf.Graph} as the default graph, using the
@{tf.Graph.as_default} context manager:

```python
g_1 = tf.Graph()
with g_1.as_default():
  # Operations created in this scope will be added to `g_1`.
  c = tf.constant("Node in g_1")

  # Sessions created in this scope will run operations from `g_1`.
  sess_1 = tf.Session()

g_2 = tf.Graph()
with g_2.as_default():
  # Operations created in this scope will be added to `g_2`.
  d = tf.constant("Node in g_2")

# Alternatively, you can pass a graph when constructing a `tf.Session`:
# `sess_2` will run operations from `g_2`.
sess_2 = tf.Session(graph=g_2)

assert c.graph is g_1
assert sess_1.graph is g_1

assert d.graph is g_2
assert sess_2.graph is g_2
```

To inspect the current default graph, call @{tf.get_default_graph}, which
returns a @{tf.Graph} object:

```python
# Print all of the operations in the default graph.
g = tf.get_default_graph()
print(g.get_operations())
```
