<!--{
	"Title": "Diagnostics",
	"Template": true
}-->

<!--
NOTE: In this document and others in this directory, the convention is to
set fixed-width phrases with non-fixed-width spaces, as in
<code>hello</code> <code>world</code>.
Do not send CLs removing the interior tags from such phrases.
-->

<h2 id="introduction">Introduction</h2>

<p>
The Go ecosystem provides a large suite of APIs and tools to
diagnose logic and performance problems in Go programs. This page
summarizes the available tools and helps Go users pick the right one
for their specific problem.
</p>

<p>
Diagnostics solutions can be categorized into the following groups:
</p>

<ul>
<li><strong>Profiling</strong>: Profiling tools analyze the complexity and costs of a
Go program, such as its memory usage and frequently called
functions, to identify the expensive sections of a program.</li>
<li><strong>Tracing</strong>: Tracing is a way to instrument code to analyze latency
throughout the lifecycle of a call or user request. Traces provide an
overview of how much latency each component contributes to the overall
latency in a system. Traces can span multiple Go processes.</li>
<li><strong>Debugging</strong>: Debugging allows us to pause a Go program and examine
its execution. Program state and flow can be verified with debugging.</li>
<li><strong>Runtime statistics and events</strong>: Collection and analysis of runtime stats and events
provides a high-level overview of the health of Go programs. Spikes or dips in metrics
help us identify changes in throughput, utilization, and performance.</li>
</ul>

<p>
Note: Some diagnostics tools may interfere with each other. For example, precise
memory profiling skews CPU profiles, and goroutine blocking profiling affects scheduler
traces. Use tools in isolation to get more precise info.
</p>

<h2 id="profiling">Profiling</h2>

<p>
Profiling is useful for identifying expensive or frequently called sections
of code. The Go runtime provides <a href="https://golang.org/pkg/runtime/pprof/">
profiling data</a> in the format expected by the
<a href="https://github.com/google/pprof/blob/master/doc/pprof.md">pprof visualization tool</a>.
The profiling data can be collected during testing
via <code>go</code> <code>test</code> or endpoints made available from the <a href="/pkg/net/http/pprof/">
net/http/pprof</a> package. Users need to collect the profiling data and use pprof tools to filter
and visualize the top code paths.
</p>

<p>Predefined profiles provided by the <a href="/pkg/runtime/pprof">runtime/pprof</a> package:</p>

<ul>
<li>
<strong>cpu</strong>: CPU profile determines where a program spends
its time while actively consuming CPU cycles (as opposed to while sleeping or waiting for I/O).
</li>
<li>
<strong>heap</strong>: Heap profile reports memory allocation samples;
used to monitor current and historical memory usage, and to check for memory leaks.
</li>
<li>
<strong>threadcreate</strong>: Thread creation profile reports the sections
of the program that lead to the creation of new OS threads.
</li>
<li>
<strong>goroutine</strong>: Goroutine profile reports the stack traces of all current goroutines.
</li>
<li>
<strong>block</strong>: Block profile shows where goroutines block waiting on synchronization
primitives (including timer channels). Block profile is not enabled by default;
use <code>runtime.SetBlockProfileRate</code> to enable it (see the example after this list).
</li>
<li>
<strong>mutex</strong>: Mutex profile reports lock contention. When you think your
CPU is not fully utilized due to mutex contention, use this profile. Mutex profile
is not enabled by default; use <code>runtime.SetMutexProfileFraction</code> to enable it.
</li>
</ul>
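<p>
For example, here is a minimal sketch of how a program might opt in to the
block and mutex profiles at startup; the sampling rates are illustrative,
and aggressive rates add overhead:
</p>

<p>
<pre>
package main

import (
	"log"
	"net/http"
	_ "net/http/pprof" // registers /debug/pprof handlers on the default mux
	"runtime"
)

func main() {
	// Report every blocking event; larger rates sample
	// less often and cost less.
	runtime.SetBlockProfileRate(1)

	// Report roughly 1 in 100 mutex contention events.
	runtime.SetMutexProfileFraction(100)

	// The profiles are then served at /debug/pprof/block
	// and /debug/pprof/mutex.
	log.Fatal(http.ListenAndServe(":6060", nil))
}
</pre>
</p>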
<p><strong>What other profilers can I use to profile Go programs?</strong></p>

<p>
On Linux, <a href="https://perf.wiki.kernel.org/index.php/Tutorial">perf tools</a>
can be used for profiling Go programs. Perf can profile
and unwind cgo/SWIG code and the kernel, so it can be useful to get insights into
native/kernel performance bottlenecks. On macOS, the
<a href="https://developer.apple.com/library/content/documentation/DeveloperTools/Conceptual/InstrumentsUserGuide/">Instruments</a>
suite can be used to profile Go programs.
</p>

<p><strong>Can I profile my production services?</strong></p>

<p>Yes. It is safe to profile programs in production, but enabling
some profiles (e.g. the CPU profile) adds cost. You should expect to
see some performance degradation. The performance penalty can be estimated
by measuring the overhead of the profiler before turning it on in
production.
</p>

<p>
You may want to periodically profile your production services.
Especially in a system with many replicas of a single process, selecting
a random replica periodically is a safe option.
Select a production process, profile it for
X seconds every Y seconds, and save the results for visualization and
analysis; then repeat periodically. Results may be manually and/or automatically
reviewed to find problems.
Profiles can interfere with each other when collected simultaneously,
so it is recommended to collect only a single profile at a time.
</p>

<p>
<strong>What are the best ways to visualize the profiling data?</strong>
</p>

<p>
The Go tools provide text, graph, and <a href="http://valgrind.org/docs/manual/cl-manual.html">callgrind</a>
visualization of the profile data using
<code><a href="https://github.com/google/pprof/blob/master/doc/pprof.md">go tool pprof</a></code>.
Read <a href="https://blog.golang.org/profiling-go-programs">Profiling Go programs</a>
to see them in action.
</p>

<p>
<img width="800" src="https://storage.googleapis.com/golangorg-assets/pprof-text.png">
<br>
<small>Listing of the most expensive calls as text.</small>
</p>

<p>
<img width="800" src="https://storage.googleapis.com/golangorg-assets/pprof-dot.png">
<br>
<small>Visualization of the most expensive calls as a graph.</small>
</p>

<p>Weblist view displays the expensive parts of the source line by line in
an HTML page. In the following example, 530ms is spent in
<code>runtime.concatstrings</code>, and the cost of each line is presented
in the listing.</p>

<p>
<img width="800" src="https://storage.googleapis.com/golangorg-assets/pprof-weblist.png">
<br>
<small>Visualization of the most expensive calls as weblist.</small>
</p>

<p>
Another way to visualize profile data is a <a href="http://www.brendangregg.com/flamegraphs.html">flame graph</a>.
Flame graphs allow you to move along a specific ancestry path, so you can zoom
in and out of specific sections of code.
The <a href="https://github.com/google/pprof">upstream pprof</a>
has support for flame graphs.
</p>

<p>
<img width="800" src="https://storage.googleapis.com/golangorg-assets/flame.png">
<br>
<small>Flame graphs offer a visualization to spot the most expensive code paths.</small>
</p>

<p><strong>Am I restricted to the built-in profiles?</strong></p>

<p>
In addition to what is provided by the runtime, Go users can create
their custom profiles via <a href="/pkg/runtime/pprof/#Profile">pprof.Profile</a>
and use the existing tools to examine them.
</p>
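<p>
As an illustrative sketch, the following tracks open connections in a custom
profile; the profile name <code>example.com/conns</code> and the helper
functions are made up for this example:
</p>

<p>
<pre>
package main

import (
	"net"
	"runtime/pprof"
)

// A custom profile that tracks currently open connections.
var connProfile = pprof.NewProfile("example.com/conns")

// dial opens a connection and records it in the custom profile,
// associating it with the caller's stack trace.
func dial(addr string) (net.Conn, error) {
	c, err := net.Dial("tcp", addr)
	if err != nil {
		return nil, err
	}
	connProfile.Add(c, 1)
	return c, nil
}

// hangup removes the connection from the profile and closes it.
func hangup(c net.Conn) error {
	connProfile.Remove(c)
	return c.Close()
}

func main() {
	if c, err := dial("example.com:80"); err == nil {
		hangup(c)
	}
}
</pre>
</p>

<p>
The resulting profile can be examined with the same pprof tooling as the
built-in profiles.
</p>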
<p><strong>Can I serve the profiler handlers (/debug/pprof/...) on a different path and port?</strong></p>

<p>
Yes. The <code>net/http/pprof</code> package registers its handlers to the default
mux by default, but you can also register them yourself by using the handlers
exported from the package.
</p>

<p>
For example, the following code will serve the pprof.Profile
handler on :7777 at /custom_debug_path/profile:
</p>

<p>
<pre>
package main

import (
	"log"
	"net/http"
	"net/http/pprof"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/custom_debug_path/profile", pprof.Profile)
	log.Fatal(http.ListenAndServe(":7777", mux))
}
</pre>
</p>

<h2 id="tracing">Tracing</h2>

<p>
Tracing is a way to instrument code to analyze latency throughout the
lifecycle of a chain of calls. Go provides the
<a href="https://godoc.org/golang.org/x/net/trace">golang.org/x/net/trace</a>
package as a minimal tracing backend per Go node, with a minimal
instrumentation library and a simple dashboard. Go also provides
an execution tracer to trace the runtime events within an interval.
</p>
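<p>
As a brief sketch of per-request instrumentation with
<code>golang.org/x/net/trace</code> (the family name and the handler are
illustrative):
</p>

<p>
<pre>
package main

import (
	"log"
	"net/http"

	"golang.org/x/net/trace"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Create a trace for this request; "myapp.Request" is an
	// illustrative family name used to group related traces.
	tr := trace.New("myapp.Request", r.URL.Path)
	defer tr.Finish()

	tr.LazyPrintf("handling request from %v", r.RemoteAddr)
	w.Write([]byte("hello"))
}

func main() {
	http.HandleFunc("/", handler)
	// The package also registers a /debug/requests page on the
	// default mux, where recent traces can be browsed.
	log.Fatal(http.ListenAndServe(":8080", nil))
}
</pre>
</p>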
<p>Tracing enables us to:</p>

<ul>
<li>Instrument and analyze application latency in a Go process.</li>
<li>Measure the cost of specific calls in a long chain of calls.</li>
<li>Figure out utilization and performance improvements.
Bottlenecks are not always obvious without tracing data.</li>
</ul>

<p>
In monolithic systems, it's relatively easy to collect diagnostic data
from the building blocks of a program. All modules live within one
process and share common resources to report logs, errors, and other
diagnostic information. Once your system grows beyond a single process and
starts to become distributed, it becomes harder to follow a call starting
from the front-end web server to all of its back-ends until a response is
returned back to the user. This is where distributed tracing plays a big
role in instrumenting and analyzing your production systems.
</p>

<p>
Distributed tracing is a way to instrument code to analyze latency throughout
the lifecycle of a user request. When a system is distributed and when
conventional profiling and debugging tools don't scale, you might want
to use distributed tracing tools to analyze the performance of your user
requests and RPCs.
</p>

<p>Distributed tracing enables us to:</p>

<ul>
<li>Instrument and profile application latency in a large system.</li>
<li>Track all RPCs within the lifecycle of a user request and see integration issues
that are only visible in production.</li>
<li>Figure out performance improvements that can be applied to our systems.
Many bottlenecks are not obvious before the collection of tracing data.</li>
</ul>

<p>The Go ecosystem provides various distributed tracing libraries per tracing system
and backend-agnostic ones.</p>


<p><strong>Is there a way to automatically intercept each function call and create traces?</strong></p>

<p>
Go doesn't provide a way to automatically intercept every function call and create
trace spans. You need to manually instrument your code to create, end, and annotate spans.
</p>

<p><strong>How should I propagate trace headers in Go libraries?</strong></p>

<p>
You can propagate trace identifiers and tags in the
<a href="/pkg/context#Context"><code>context.Context</code></a>.
There is no canonical trace key or common representation of trace headers
in the industry yet. Each tracing provider is responsible for providing propagation
utilities in their Go libraries.
</p>

<p>
<strong>What other low-level events from the standard library or
runtime can be included in a trace?</strong>
</p>

<p>
The standard library and runtime are trying to expose several additional APIs
to notify on low-level internal events. For example,
<a href="/pkg/net/http/httptrace#ClientTrace"><code>httptrace.ClientTrace</code></a>
provides APIs to follow low-level events in the life cycle of an outgoing request.
There is an ongoing effort to retrieve low-level runtime events from
the runtime execution tracer and allow users to define and record their user events.
</p>
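<p>
For illustration, a minimal sketch that uses <code>httptrace.ClientTrace</code>
to observe DNS resolution and connection reuse for an outgoing request; the two
hooks shown are only a small subset of those available:
</p>

<p>
<pre>
package main

import (
	"log"
	"net/http"
	"net/http/httptrace"
)

func main() {
	req, err := http.NewRequest("GET", "https://example.com", nil)
	if err != nil {
		log.Fatal(err)
	}

	// Hook two of the low-level events in the request's life cycle.
	trace := &amp;httptrace.ClientTrace{
		DNSDone: func(info httptrace.DNSDoneInfo) {
			log.Printf("DNS done: %v", info.Addrs)
		},
		GotConn: func(info httptrace.GotConnInfo) {
			log.Printf("got connection (reused: %v)", info.Reused)
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
}
</pre>
</p>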
<h2 id="debugging">Debugging</h2>

<p>
Debugging is the process of identifying why a program misbehaves.
Debuggers allow us to understand a program's execution flow and current state.
There are several styles of debugging; this section will only focus on attaching
a debugger to a program and core dump debugging.
</p>

<p>Go users mostly use the following debuggers:</p>

<ul>
<li>
<a href="https://github.com/derekparker/delve">Delve</a>:
Delve is a debugger for the Go programming language. It has
support for Go's runtime concepts and built-in types. Delve
aims to be a full-featured, reliable debugger for Go programs.
</li>
<li>
<a href="https://golang.org/doc/gdb">GDB</a>:
Go provides GDB support via the standard Go compiler and Gccgo.
The stack management, threading, and runtime contain aspects that differ
enough from the execution model GDB expects that they can confuse the
debugger, even when the program is compiled with gccgo. Even though
GDB can be used to debug Go programs, it is not ideal and may
create confusion.
</li>
</ul>

<p><strong>How well do debuggers work with Go programs?</strong></p>

<p>
The <code>gc</code> compiler performs optimizations such as
function inlining and variable registerization. These optimizations
sometimes make debugging with debuggers harder. There is an ongoing
effort to improve the quality of the DWARF information generated for
optimized binaries. Until those improvements are available, we recommend
disabling optimizations when building the code being debugged. The following
command builds a package with no compiler optimizations:

<p>
<pre>
$ go build -gcflags=all="-N -l"
</pre>
</p>

As part of the improvement effort, Go 1.10 introduced a new compiler
flag <code>-dwarflocationlists</code>. The flag causes the compiler to
add location lists that help debuggers work with optimized binaries.
The following command builds a package with optimizations but with
the DWARF location lists:

<p>
<pre>
$ go build -gcflags="-dwarflocationlists=true"
</pre>
</p>

<p><strong>What's the recommended debugger user interface?</strong></p>

<p>
Even though both delve and gdb provide CLIs, most editor integrations
and IDEs provide debugging-specific user interfaces.
</p>

<p><strong>Is it possible to do postmortem debugging with Go programs?</strong></p>

<p>
A core dump file is a file that contains the memory dump of a running
process and its process status. It is primarily used for post-mortem
debugging of a program, but it can also be used to understand the state
of a program while it is still running. These two cases make core dump
debugging a good diagnostic aid for postmortem analysis of production
services. It is possible to obtain core files from Go programs and
use delve or gdb to debug them; see the
<a href="https://golang.org/wiki/CoreDumpDebugging">core dump debugging</a>
page for a step-by-step guide.
</p>

<h2 id="runtime">Runtime statistics and events</h2>

<p>
The runtime provides stats and reporting of internal events for
users to diagnose performance and utilization problems at the
runtime level.
</p>

<p>
Users can monitor these stats to better understand the overall
health and performance of Go programs.
Some frequently monitored stats and states:
</p>

<ul>
<li><code><a href="/pkg/runtime/#ReadMemStats">runtime.ReadMemStats</a></code>
reports the metrics related to heap
allocation and garbage collection. Memory stats are useful for
monitoring how much memory a process is consuming, checking whether
the process is utilizing memory well, and catching
memory leaks (see the sketch after this list).</li>
<li><code><a href="/pkg/runtime/debug/#ReadGCStats">debug.ReadGCStats</a></code>
reads statistics about garbage collection.
It is useful to see how much of the resources are spent on GC pauses.
It also reports a timeline of garbage collector pauses and pause time percentiles.</li>
<li><code><a href="/pkg/runtime/debug/#Stack">debug.Stack</a></code>
returns the current stack trace. A stack trace
is useful to see how many goroutines are currently running,
what they are doing, and whether they are blocked or not.</li>
<li><code><a href="/pkg/runtime/debug/#WriteHeapDump">debug.WriteHeapDump</a></code>
suspends the execution of all goroutines
and allows you to dump the heap to a file. A heap dump is a
snapshot of a Go process' memory at a given time. It contains all
allocated objects as well as goroutines, finalizers, and more.</li>
<li><code><a href="/pkg/runtime#NumGoroutine">runtime.NumGoroutine</a></code>
returns the number of current goroutines.
The value can be monitored to see whether enough goroutines are
utilized, or to detect goroutine leaks.</li>
</ul>
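<p>
As a rough sketch, a program could sample a few of these stats periodically;
the interval and the fields logged here are illustrative:
</p>

<p>
<pre>
package main

import (
	"log"
	"runtime"
	"time"
)

func main() {
	// Sample a few runtime stats every 10 seconds. Note that
	// ReadMemStats briefly stops the world, so avoid calling it
	// at very high frequency.
	for range time.Tick(10 * time.Second) {
		var m runtime.MemStats
		runtime.ReadMemStats(&amp;m)
		log.Printf("heap alloc: %d bytes, GC cycles: %d, goroutines: %d",
			m.HeapAlloc, m.NumGC, runtime.NumGoroutine())
	}
}
</pre>
</p>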
<h3 id="execution-tracer">Execution tracer</h3>

<p>Go comes with a runtime execution tracer to capture a wide range
of runtime events. Scheduling, syscall, garbage collection,
heap size, and other events are collected by the runtime and made available
for visualization by the go tool trace. The execution tracer is a tool
to detect latency and utilization problems. You can examine how well
the CPU is utilized, and when networking or syscalls are a cause of
preemption for the goroutines.</p>

<p>The tracer is useful to:</p>
<ul>
<li>Understand how your goroutines execute.</li>
<li>Understand some of the core runtime events such as GC runs.</li>
<li>Identify poorly parallelized execution.</li>
</ul>

<p>However, it is not great for identifying hot spots, such as
the cause of excessive memory or CPU usage.
Use the profiling tools first to address those.</p>

<p>
<img width="800" src="https://storage.googleapis.com/golangorg-assets/tracer-lock.png">
</p>

<p>Above, the go tool trace visualization shows the execution started
fine, and then it became serialized. It suggests that there might
be lock contention for a shared resource that creates a bottleneck.</p>

<p>See <a href="https://golang.org/cmd/trace/"><code>go</code> <code>tool</code> <code>trace</code></a>
to collect and analyze runtime traces.
</p>

<h3 id="godebug">GODEBUG</h3>

<p>The runtime also emits events and information if the
<a href="https://golang.org/pkg/runtime/#hdr-Environment_Variables">GODEBUG</a>
environment variable is set accordingly.</p>

<ul>
<li><code>GODEBUG=gctrace=1</code> prints garbage collector events at
each collection, summarizing the amount of memory collected
and the length of the pause (see the example after this list).</li>
<li><code>GODEBUG=schedtrace=X</code> prints scheduling events every X milliseconds.</li>
</ul>
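<p>
For example, the following commands (the binary name is illustrative) run a
program with GC tracing enabled, and with scheduler state printed every
1000 milliseconds:
</p>

<p>
<pre>
$ GODEBUG=gctrace=1 ./myserver
$ GODEBUG=schedtrace=1000 ./myserver
</pre>
</p>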