<html devsite>
<head>
  <title>Audio Terminology</title>
  <meta name="project_path" value="/_project.yaml" />
  <meta name="book_path" value="/_book.yaml" />
</head>
<body>
<!--
    Copyright 2017 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->

<p>
This glossary of audio-related terminology includes widely used generic terms
and Android-specific terms.
</p>

<h2 id="genericTerm">Generic Terms</h2>

<p>
Generic audio-related terms have conventional meanings.
</p>

<h3 id="digitalAudioTerms">Digital Audio</h3>
<p>
Digital audio terms relate to handling sound using audio signals encoded
in digital form. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital_audio">Digital Audio</a>.
</p>

<dl>

<dt>acoustics</dt>
<dd>
Study of the mechanical properties of sound, such as how the physical
placement of transducers (speakers, microphones, etc.) on a device affects
perceived audio quality.
</dd>

<dt>attenuation</dt>
<dd>
Multiplicative factor less than or equal to 1.0, applied to an audio signal
to decrease the signal level. Compare to <em>gain</em>.
</dd>

<dt>audiophile</dt>
<dd>
Person concerned with a superior music reproduction experience, especially
one willing to make substantial tradeoffs (expense, component size, room
design, etc.) for sound quality.
For details, refer to
<a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>.
</dd>

<dt>bits per sample or bit depth</dt>
<dd>
Number of bits of information per sample.
</dd>

<dt>channel</dt>
<dd>
Single stream of audio information, usually corresponding to one location of
recording or playback.
</dd>

<dt>downmixing</dt>
<dd>
Decrease the number of channels, such as from stereo to mono or from 5.1 to
stereo. Accomplished by dropping channels, mixing channels, or more advanced
signal processing. Simple mixing without attenuation or limiting has the
potential for overflow and clipping. Compare to <em>upmixing</em>.
</dd>

<dt>DSD</dt>
<dd>
Direct Stream Digital. Proprietary audio encoding based on
<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density
modulation</a>. While Pulse Code Modulation (PCM) encodes a waveform as a
sequence of individual audio samples of multiple bits, DSD encodes a waveform as
a sequence of bits at a very high sample rate (without the concept of samples).
Both PCM and DSD represent multiple channels by independent sequences. DSD is
better suited to content distribution than as an internal representation for
processing, as it can be difficult to apply traditional digital signal
processing (DSP) algorithms to DSD. DSD is used in
<a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD (SACD)</a>
and in DSD over PCM (DoP) for USB. For details, refer to
<a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Direct Stream
Digital</a>.
</dd>

<dt>duck</dt>
<dd>
Temporarily reduce the volume of a stream when another stream becomes active.
For example, if music is playing when a notification arrives, the music ducks
while the notification plays. Compare to <em>mute</em>.
</dd>

<dt>FIFO</dt>
<dd>
First In, First Out.
Hardware module or software data structure that implements
<a href="http://en.wikipedia.org/wiki/FIFO">First In, First Out</a>
queueing of data. In an audio context, the data stored in the queue are
typically audio frames. FIFO can be implemented by a
<a href="http://en.wikipedia.org/wiki/Circular_buffer">circular buffer</a>.
</dd>

<dt>frame</dt>
<dd>
Set of samples, one per channel, at a point in time.
</dd>

<dt>frames per buffer</dt>
<dd>
Number of frames handed from one module to the next at one time. The audio HAL
interface uses the concept of frames per buffer.
</dd>

<dt>gain</dt>
<dd>
Multiplicative factor greater than or equal to 1.0, applied to an audio signal
to increase the signal level. Compare to <em>attenuation</em>.
</dd>

<dt>HD audio</dt>
<dd>
High-Definition audio. Synonym for high-resolution audio (but different from
Intel High Definition Audio).
</dd>

<dt>Hz</dt>
<dd>
Units for sample rate or frame rate.
</dd>

<dt>high-resolution audio</dt>
<dd>
Representation with greater bit depth and sample rate than CDs (stereo 16-bit
PCM at 44.1 kHz) and without lossy data compression. Equivalent to HD audio.
For details, refer to
<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution
audio</a>.
</dd>

<dt>latency</dt>
<dd>
Time delay as a signal passes through a system.
</dd>

<dt>lossless</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data
compression algorithm</a> preserves bit accuracy across encoding and
decoding; the result of decoding previously encoded data is identical
to the original data.
Examples of lossless audio content distribution formats
include <a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
<a href="http://en.wikipedia.org/wiki/WAV">WAV</a>, and
<a href="http://en.wikipedia.org/wiki/FLAC">FLAC</a>.
The authoring process may reduce the bit depth or sample rate from that of the
<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>; distribution
formats that preserve the resolution and bit accuracy of masters are the subject
of high-resolution audio.
</dd>

<dt>lossy</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data
compression algorithm</a> attempts to preserve the most important features
of media across encoding and decoding; the result of decoding previously
encoded data is perceptually similar to the original data but not identical.
Examples of lossy audio compression algorithms include MP3 and AAC. As analog
values are from a continuous domain and digital values are discrete, ADC and DAC
are lossy conversions with respect to amplitude. See also <em>transparency</em>.
</dd>

<dt>mono</dt>
<dd>
One channel.
</dd>

<dt>multichannel</dt>
<dd>
See <em>surround sound</em>. In strict terms, <em>stereo</em> is more than one
channel and could be considered multichannel; however, such usage is confusing
and thus avoided.
</dd>

<dt>mute</dt>
<dd>
Temporarily force volume to be zero, independently of the usual volume controls.
</dd>

<dt>overrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to accept supplied data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>underrun</em>.
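<p>
The overrun condition (and its counterpart, <em>underrun</em>) can be
illustrated with a minimal bounded FIFO sketch in Python. This is a generic
illustration, not Android code; the class and method names are invented for
the example. A writer that outpaces the reader overruns the buffer, and a
reader that outpaces the writer underruns it.
</p>

```python
from collections import deque

class BoundedFifo:
    """Minimal FIFO of audio frames with a fixed capacity (illustrative only)."""
    def __init__(self, capacity):
        self.buf = deque()
        self.capacity = capacity

    def write(self, frame):
        if len(self.buf) == self.capacity:
            return False  # overrun: consumer did not accept data in time
        self.buf.append(frame)
        return True

    def read(self):
        if not self.buf:
            return None   # underrun: producer did not supply data in time
        return self.buf.popleft()

fifo = BoundedFifo(capacity=2)
assert fifo.write(0.1) and fifo.write(0.2)
assert fifo.write(0.3) is False   # third write overruns a 2-frame buffer
assert fifo.read() == 0.1
assert fifo.read() == 0.2
assert fifo.read() is None        # reading an empty buffer underruns
```

<p>
A real audio FIFO reports these conditions to the caller (or glitches audibly)
rather than silently dropping data.
</p>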
</dd>

<dt>panning</dt>
<dd>
Direct a signal to a desired position within a stereo or multichannel field.
</dd>

<dt>PCM</dt>
<dd>
Pulse Code Modulation. Most common low-level encoding of digital audio. The
audio signal is sampled at a regular interval, called the sample rate, then
quantized to discrete values within a particular range depending on the bit
depth. For example, for 16-bit PCM the sample values are integers between
-32768 and +32767.
</dd>

<dt>ramp</dt>
<dd>
Gradually increase or decrease the level of a particular audio parameter, such
as the volume or the strength of an effect. A volume ramp is commonly applied
when pausing and resuming music to avoid a hard audible transition.
</dd>

<dt>sample</dt>
<dd>
Number representing the audio value for a single channel at a point in time.
</dd>

<dt>sample rate or frame rate</dt>
<dd>
Number of frames per second. While <em>frame rate</em> is more accurate,
<em>sample rate</em> is conventionally used to mean frame rate.
</dd>

<dt>sonification</dt>
<dd>
Use of sound to express feedback or information, such as touch sounds and
keyboard sounds.
</dd>

<dt>stereo</dt>
<dd>
Two channels.
</dd>

<dt>stereo widening</dt>
<dd>
Effect applied to a stereo signal to make another stereo signal that sounds
fuller and richer. The effect can also be applied to a mono signal, where it is
a type of upmixing.
</dd>

<dt>surround sound</dt>
<dd>
Techniques for increasing the ability of a listener to perceive sound position
beyond stereo left and right.
</dd>

<dt>transparency</dt>
<dd>
Ideal result of lossy data compression. Lossy data conversion is transparent if
it is perceptually indistinguishable from the original by a human subject.
For details, refer to
<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a>.
</dd>

<dt>underrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to supply needed data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>overrun</em>.
</dd>

<dt>upmixing</dt>
<dd>
Increase the number of channels, such as from mono to stereo or from stereo to
surround sound. Accomplished by duplication, panning, or more advanced signal
processing. Compare to <em>downmixing</em>.
</dd>

<dt>virtualizer</dt>
<dd>
Effect that attempts to spatialize audio channels, such as trying to simulate
more speakers or give the illusion that sound sources have position.
</dd>

<dt>volume</dt>
<dd>
Loudness, the subjective strength of an audio signal.
</dd>

</dl>

<h3 id="interDeviceTerms">Inter-device interconnect</h3>

<p>
Inter-device interconnection technologies connect audio and video components
between devices and are readily visible at the external connectors. The HAL
implementer and end user should be aware of these terms.
</p>

<dl>

<dt>Bluetooth</dt>
<dd>
Short-range wireless technology.
For details on the audio-related
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
and
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>,
refer to
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a>
for music,
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a>
for telephony, and
<a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>.
</dd>

<dt>DisplayPort</dt>
<dd>
Digital display interface by the Video Electronics Standards Association (VESA).
</dd>

<dt>dongle</dt>
<dd>
A <a href="https://en.wikipedia.org/wiki/Dongle">dongle</a>
is a small gadget, especially one that hangs off another device.
</dd>

<dt>FireWire</dt>
<dd>
See IEEE 1394.
</dd>

<dt>HDMI</dt>
<dd>
High-Definition Multimedia Interface. Interface for transferring audio and
video data. For mobile devices, a micro-HDMI (type D) or MHL connector is used.
</dd>

<dt>IEEE 1394</dt>
<dd>
<a href="https://en.wikipedia.org/wiki/IEEE_1394">IEEE 1394</a>, also called
FireWire, is a serial bus used for real-time low-latency applications such as
audio.
</dd>

<dt>Intel HDA</dt>
<dd>
Intel High Definition Audio (do not confuse with generic <em>high-definition
audio</em> or <em>high-resolution audio</em>). Specification for a front-panel
connector. For details, refer to
<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High
Definition Audio</a>.
</dd>

<dt>interface</dt>
<dd>
An <a href="https://en.wikipedia.org/wiki/Interface_(computing)">interface</a>
converts a signal from one representation to another.
Common interfaces
include a USB audio interface and MIDI interface.
</dd>

<dt>line level</dt>
<dd>
<a href="http://en.wikipedia.org/wiki/Line_level">Line level</a> is the strength
of an analog audio signal that passes between audio components, not transducers.
</dd>

<dt>MHL</dt>
<dd>
Mobile High-Definition Link. Mobile audio/video interface, often over a
micro-USB connector.
</dd>

<dt>phone connector</dt>
<dd>
Mini or sub-mini component that connects a device to wired headphones, a
headset, or a line-level amplifier.
</dd>

<dt>SlimPort</dt>
<dd>
Adapter from micro-USB to HDMI.
</dd>

<dt>S/PDIF</dt>
<dd>
Sony/Philips Digital Interface Format. Interconnect for uncompressed PCM. For
details, refer to <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
S/PDIF is the consumer-grade variant of
<a href="https://en.wikipedia.org/wiki/AES3">AES3</a>.
</dd>

<dt>Thunderbolt</dt>
<dd>
Multimedia interface that competes with USB and HDMI for connecting to high-end
peripherals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>.
</dd>

<dt>TOSLINK</dt>
<dd>
<a href="https://en.wikipedia.org/wiki/TOSLINK">TOSLINK</a> is an optical audio
cable used with <em>S/PDIF</em>.
</dd>

<dt>USB</dt>
<dd>
Universal Serial Bus. For details, refer to
<a href="http://en.wikipedia.org/wiki/USB">USB</a>.
</dd>

</dl>

<h3 id="intraDeviceTerms">Intra-device interconnect</h3>

<p>
Intra-device interconnection technologies connect internal audio components
within a given device and are not visible without disassembling the device. The
HAL implementer may need to be aware of these, but not the end user.
For details
on intra-device interconnections, refer to the following articles:
</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a>, for the control channel</li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a>, for audio data, simpler than SLIMbus</li>
<li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
<li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
<li><a href="http://en.wikipedia.org/wiki/AC%2797">AC'97</a></li>
<li><a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel HDA</a></li>
<li><a href="http://mipi.org/specifications/soundwire">SoundWire</a></li>
</ul>

<p>
In
<a href="http://www.alsa-project.org/main/index.php/ASoC">ALSA System on Chip (ASoC)</a>,
these are collectively called
<a href="https://www.kernel.org/doc/Documentation/sound/soc/dai.rst">Digital Audio Interfaces</a>
(DAI).
</p>

<h3 id="signalTerms">Audio Signal Path</h3>

<p>
Audio signal path terms relate to the signal path that audio data follows from
an application to the transducer, or vice versa.
</p>

<dl>

<dt>ADC</dt>
<dd>
Analog-to-digital converter. Module that converts an analog signal (continuous
in time and amplitude) to a digital signal (discrete in time and amplitude).
Conceptually, an ADC consists of a periodic sample-and-hold followed by a
quantizer, although it does not have to be implemented that way. An ADC is
usually preceded by a low-pass filter to remove any high-frequency components
that are not representable using the desired sample rate. For details, refer to
<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital
converter</a>.
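<p>
The sample-and-hold-plus-quantizer model of an ADC can be sketched in a few
lines of Python. This is an illustrative simulation only, not part of any
Android API; the function name and parameters are invented for the example.
It samples a 1 kHz sine at 48 kHz and quantizes it to 16-bit PCM.
</p>

```python
import math

def adc(signal, sample_rate, duration, bit_depth=16):
    """Sample an analog signal (a function of time) and quantize to signed PCM."""
    full_scale = 2 ** (bit_depth - 1)          # 32768 for 16-bit PCM
    samples = []
    for n in range(int(sample_rate * duration)):
        held = signal(n / sample_rate)         # periodic sample-and-hold
        code = round(held * full_scale)        # uniform quantizer
        code = max(-full_scale, min(full_scale - 1, code))  # clamp to range
        samples.append(code)
    return samples

# 1 kHz sine at half amplitude, sampled at 48 kHz for 1 ms (48 frames)
sine = lambda t: 0.5 * math.sin(2 * math.pi * 1000 * t)
pcm = adc(sine, sample_rate=48000, duration=0.001)
assert len(pcm) == 48
assert all(-32768 <= s <= 32767 for s in pcm)
```

<p>
The clamp step mirrors the fixed range of 16-bit PCM noted under <em>PCM</em>
above; the quantization itself is why ADC is a lossy conversion with respect to
amplitude.
</p>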
</dd>

<dt>AP</dt>
<dd>
Application processor. Main general-purpose computer on a mobile device.
</dd>

<dt>codec</dt>
<dd>
Coder-decoder. Module that encodes and/or decodes an audio signal from one
representation to another (typically analog to PCM, or PCM to analog). In strict
terms, <em>codec</em> is reserved for modules that both encode and decode but
can be used loosely to refer to only one of these. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
</dd>

<dt>DAC</dt>
<dd>
Digital-to-analog converter. Module that converts a digital signal (discrete in
time and amplitude) to an analog signal (continuous in time and amplitude).
Often followed by a low-pass filter to remove high-frequency components
introduced by digital quantization. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog
converter</a>.
</dd>

<dt>DSP</dt>
<dd>
Digital Signal Processor. Optional component typically located after the
application processor (for output) or before the application processor (for
input). Its primary purpose is to off-load the application processor and provide
signal processing features at a lower power cost.
</dd>

<dt>PDM</dt>
<dd>
Pulse-density modulation. Form of modulation used to represent an analog signal
by a digital signal, where the relative density of 1s versus 0s indicates the
signal level. Commonly used by digital-to-analog converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density
modulation</a>.
</dd>

<dt>PWM</dt>
<dd>
Pulse-width modulation. Form of modulation used to represent an analog signal by
a digital signal, where the relative width of a digital pulse indicates the
signal level. Commonly used by analog-to-digital converters.
For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width
modulation</a>.
</dd>

<dt>transducer</dt>
<dd>
Converts variations in physical real-world quantities to electrical signals. In
audio, the physical quantity is sound pressure, and the transducers are the
loudspeaker and microphone. For details, refer to
<a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
</dd>

</dl>

<h3 id="srcTerms">Sample Rate Conversion</h3>
<p>
Sample rate conversion terms relate to the process of converting from one
sampling rate to another.
</p>

<dl>

<dt>downsample</dt>
<dd>Resample, where sink sample rate &lt; source sample rate.</dd>

<dt>Nyquist frequency</dt>
<dd>
Maximum frequency component that can be represented by a discretized signal,
equal to 1/2 of the sample rate. For example, the human hearing range extends
to approximately 20 kHz, so a digital audio signal must have a sample rate of
at least 40 kHz to represent that range. In practice, sample rates of 44.1 kHz
and 48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz and 24 kHz
respectively. For details, refer to
<a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
and
<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>.
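<p>
The Nyquist limit can be demonstrated numerically: a tone above the Nyquist
frequency produces, after sampling, exactly the same samples as an aliased
tone below it, so the two are indistinguishable. The sketch below is purely
illustrative (the helper function is invented for the example).
</p>

```python
import math

def sample_tone(freq_hz, sample_rate, n_frames):
    """Sample a unit-amplitude sine tone at the given rate."""
    return [math.sin(2 * math.pi * freq_hz * n / sample_rate)
            for n in range(n_frames)]

sample_rate = 48000          # Nyquist frequency is 24000 Hz
nyquist = sample_rate / 2

# A 30 kHz tone exceeds Nyquist; its samples alias to 48 kHz - 30 kHz = 18 kHz
# (the same samples as an 18 kHz tone with inverted phase).
above = sample_tone(30000, sample_rate, 64)
alias = [-s for s in sample_tone(18000, sample_rate, 64)]
assert all(abs(a - b) < 1e-9 for a, b in zip(above, alias))
```

<p>
This is why an ADC is preceded by a low-pass (anti-aliasing) filter: content
above the Nyquist frequency would otherwise fold back into the audible band.
</p>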
</dd>

<dt>resampler</dt>
<dd>Synonym for sample rate converter.</dd>

<dt>resampling</dt>
<dd>Process of converting sample rate.</dd>

<dt>sample rate converter</dt>
<dd>Module that resamples.</dd>

<dt>sink</dt>
<dd>Output of a resampler.</dd>

<dt>source</dt>
<dd>Input to a resampler.</dd>

<dt>upsample</dt>
<dd>Resample, where sink sample rate &gt; source sample rate.</dd>

</dl>

<h2 id="androidSpecificTerms">Android-Specific Terms</h2>

<p>
Android-specific terms include terms used only in the Android audio framework
and generic terms that have special meaning within Android.
</p>

<dl>

<dt>ALSA</dt>
<dd>
Advanced Linux Sound Architecture. An audio framework for Linux that has also
influenced other systems. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>.
In Android, ALSA refers to the kernel audio framework and drivers and not to the
user-mode API. See also <em>tinyalsa</em>.
</dd>

<dt>audio device</dt>
<dd>
Audio I/O endpoint backed by a HAL implementation.
</dd>

<dt>AudioEffect</dt>
<dd>
API and implementation framework for output (post-processing) effects and input
(pre-processing) effects. The API is defined at
<a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
</dd>

<dt>AudioFlinger</dt>
<dd>
Android sound server implementation. AudioFlinger runs within the mediaserver
process. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>.
</dd>

<dt>audio focus</dt>
<dd>
Set of APIs for managing audio interactions across multiple independent apps.
For details, see
<a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio Focus</a>
and the focus-related methods and constants of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>AudioMixer</dt>
<dd>
Module in AudioFlinger responsible for combining multiple tracks and applying
attenuation (volume) and effects. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a>
(which discusses a mixer as a hardware device or software application, rather
than a software module within a system).
</dd>

<dt>audio policy</dt>
<dd>
Service responsible for all actions that require a policy decision to be made
first, such as opening a new I/O stream, re-routing after a change, and stream
volume management.
</dd>

<dt>AudioRecord</dt>
<dd>
Primary low-level client API for receiving data from an audio input device such
as a microphone. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
</dd>

<dt>AudioResampler</dt>
<dd>
Module in AudioFlinger responsible for
<a href="src.html">sample rate conversion</a>.
</dd>

<dt>audio source</dt>
<dd>
An enumeration of constants that indicates the desired use case for capturing
audio input. For details, see
<a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio source</a>.
At API level 21 and above, <a href="attributes.html">audio attributes</a> are
preferred.
</dd>

<dt>AudioTrack</dt>
<dd>
Primary low-level client API for sending data to an audio output device such as
a speaker. The data is usually in PCM format.
The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
</dd>

<dt>audio_utils</dt>
<dd>
Audio utility library for features such as PCM format conversion, WAV file I/O,
and
<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>, which is
largely independent of the Android platform.
</dd>

<dt>client</dt>
<dd>
Usually an application or app client. However, an AudioFlinger client can be a
thread running within the mediaserver system process, such as when playing media
decoded by a MediaPlayer object.
</dd>

<dt>HAL</dt>
<dd>
Hardware Abstraction Layer. HAL is a generic term in Android; in audio, it is a
layer between AudioFlinger and the kernel device driver with a C API (which
replaces the C++ libaudio).
</dd>

<dt>FastCapture</dt>
<dd>
Thread within AudioFlinger that sends audio data to lower latency fast tracks
and drives the input device when configured for reduced latency.
</dd>

<dt>FastMixer</dt>
<dd>
Thread within AudioFlinger that receives and mixes audio data from lower latency
fast tracks and drives the primary output device when configured for reduced
latency.
</dd>

<dt>fast track</dt>
<dd>
AudioTrack or AudioRecord client with lower latency but fewer features on some
devices and routes.
</dd>

<dt>MediaPlayer</dt>
<dd>
Higher-level client API than AudioTrack. Plays encoded content or content that
includes multimedia audio and video tracks.
</dd>

<dt>media.log</dt>
<dd>
AudioFlinger debugging feature, available in custom builds only, for logging
audio events to a circular buffer where they can then be retroactively dumped
when needed.
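<p>
The circular-buffer logging pattern used here can be sketched generically in
Python. This is not the AudioFlinger implementation; the class and method names
are invented for illustration. Events are recorded cheaply into a fixed-size
buffer, and only the most recent entries survive to be dumped on demand.
</p>

```python
from collections import deque

class EventLog:
    """Fixed-size event log: old entries are overwritten, recent ones dumpable."""
    def __init__(self, capacity):
        self.events = deque(maxlen=capacity)  # deque drops oldest when full

    def log(self, event):
        self.events.append(event)             # cheap enough for a hot path

    def dump(self):
        return list(self.events)              # retroactive snapshot

log = EventLog(capacity=3)
for e in ["open", "start", "underrun", "stop"]:
    log.log(e)
assert log.dump() == ["start", "underrun", "stop"]  # oldest event overwritten
```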
</dd>

<dt>mediaserver</dt>
<dd>
Android system process that contains media-related services, including
AudioFlinger.
</dd>

<dt>NBAIO</dt>
<dd>
Non-blocking audio input/output. Abstraction for AudioFlinger ports. The term
can be misleading as some implementations of the NBAIO API support blocking. The
key implementations of NBAIO are for different types of pipes.
</dd>

<dt>normal mixer</dt>
<dd>
Thread within AudioFlinger that services most full-featured AudioTrack clients.
Directly drives an output device or feeds its sub-mix into FastMixer via a pipe.
</dd>

<dt>OpenSL ES</dt>
<dd>
Audio API standard by
<a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
API level 9 support a native audio API that is based on a subset of
<a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
</dd>

<dt>silent mode</dt>
<dd>
User-settable feature to mute the phone ringer and notifications without
affecting media playback (music, videos, games) or alarms.
</dd>

<dt>SoundPool</dt>
<dd>
Higher-level client API than AudioTrack. Plays sampled audio clips. Useful for
triggering UI feedback, game sounds, etc. The API is defined at
<a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
</dd>

<dt>Stagefright</dt>
<dd>
See <a href="/devices/media.html">Media</a>.
</dd>

<dt>StateQueue</dt>
<dd>
Module within AudioFlinger responsible for synchronizing state among threads.
Whereas NBAIO is used to pass data, StateQueue is used to pass control
information.
</dd>

<dt>strategy</dt>
<dd>
Group of stream types with similar behavior. Used by the audio policy service.
</dd>

<dt>stream type</dt>
<dd>
Enumeration that expresses a use case for audio output.
The audio policy
implementation uses the stream type, along with other parameters, to determine
volume and routing decisions. For a list of stream types, see
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>tee sink</dt>
<dd>
See <a href="debugging.html#teeSink">Audio Debugging</a>.
</dd>

<dt>tinyalsa</dt>
<dd>
Small user-mode API above the ALSA kernel interface, with a BSD license.
Recommended for HAL implementations.
</dd>

<dt>ToneGenerator</dt>
<dd>
Higher-level client API than AudioTrack. Plays dual-tone multi-frequency (DTMF)
signals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone
multi-frequency signaling</a> and the API definition at
<a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
</dd>

<dt>track</dt>
<dd>
Audio stream. Controlled by the AudioTrack or AudioRecord API.
</dd>

<dt>volume attenuation curve</dt>
<dd>
Device-specific mapping from a generic volume index to a specific attenuation
factor for a given output.
</dd>

<dt>volume index</dt>
<dd>
Unitless integer that expresses the desired relative volume of a stream. The
volume-related APIs of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
operate on volume indices rather than absolute attenuation factors.
</dd>

</dl>

</body>
</html>