page.title=Audio Terminology
@jd:body

<!--
    Copyright 2015 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<div id="qv-wrapper">
  <div id="qv">
    <h2>In this document</h2>
    <ol id="auto-toc">
    </ol>
  </div>
</div>

<p>
This glossary of audio-related terminology includes widely used generic terms
and Android-specific terms.
</p>

<h2 id="genericTerm">Generic Terms</h2>

<p>
Generic audio-related terms have conventional meanings.
</p>

<h3 id="digitalAudioTerms">Digital Audio</h3>
<p>
Digital audio terms relate to handling sound using audio signals encoded
in digital form. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital_audio">Digital Audio</a>.
</p>

<dl>

<dt>acoustics</dt>
<dd>
Study of the mechanical properties of sound, such as how the physical
placement of transducers (speakers, microphones, etc.) on a device affects
perceived audio quality.
</dd>

<dt>attenuation</dt>
<dd>
Multiplicative factor less than or equal to 1.0, applied to an audio signal
to decrease the signal level. Compare to <em>gain</em>.
</dd>

<dt>audiophile</dt>
<dd>
Person concerned with a superior music reproduction experience, especially
willing to make substantial tradeoffs (expense, component size, room design,
etc.) for sound quality. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audiophile">audiophile</a>.
</dd>

<dt>bits per sample or bit depth</dt>
<dd>
Number of bits of information per sample.
</dd>

<dt>channel</dt>
<dd>
Single stream of audio information, usually corresponding to one location of
recording or playback.
</dd>

<dt>downmixing</dt>
<dd>
Decrease the number of channels, such as from stereo to mono or from 5.1 to
stereo. Accomplished by dropping channels, mixing channels, or more advanced
signal processing. Simple mixing without attenuation or limiting has the
potential for overflow and clipping. Compare to <em>upmixing</em>.
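<p>
For illustration, a minimal sketch (not the platform's mixer) of a
stereo-to-mono downmix over interleaved 16-bit PCM, applying 0.5 attenuation
to each channel so the sum cannot overflow:
</p>
<pre>
// Downmix interleaved stereo (L, R, L, R, ...) to mono by averaging the
// two channels; averaging keeps the result within the 16-bit range.
static short[] downmixStereoToMono(short[] stereo) {
    short[] mono = new short[stereo.length / 2];
    for (int i = 0; i &lt; mono.length; i++) {
        int left = stereo[2 * i];
        int right = stereo[2 * i + 1];
        mono[i] = (short) ((left + right) / 2);
    }
    return mono;
}
</pre>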
</dd>

<dt>DSD</dt>
<dd>
Direct Stream Digital. Proprietary audio encoding based on
<a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">pulse-density
modulation</a>. While Pulse Code Modulation (PCM) encodes a waveform as a
sequence of individual audio samples of multiple bits, DSD encodes a waveform as
a sequence of bits at a very high sample rate (without the concept of samples).
Both PCM and DSD represent multiple channels by independent sequences. DSD is
better suited to content distribution than as an internal representation for
processing, as it can be difficult to apply traditional digital signal processing
(DSP) algorithms to DSD. DSD is used in <a href="http://en.wikipedia.org/wiki/Super_Audio_CD">Super Audio CD (SACD)</a> and in DSD over PCM (DoP) for USB. For details, refer
to <a href="http://en.wikipedia.org/wiki/Direct_Stream_Digital">Direct Stream
Digital</a>.
</dd>

<dt>duck</dt>
<dd>
Temporarily reduce the volume of a stream when another stream becomes active.
For example, if music is playing when a notification arrives, the music ducks
while the notification plays. Compare to <em>mute</em>.
</dd>

<dt>FIFO</dt>
<dd>
First In, First Out. Hardware module or software data structure that implements
<a href="http://en.wikipedia.org/wiki/FIFO">First In, First Out</a>
queueing of data. In an audio context, the data stored in the queue are
typically audio frames. FIFO can be implemented by a
<a href="http://en.wikipedia.org/wiki/Circular_buffer">circular buffer</a>.
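<p>
A minimal sketch of a frame FIFO backed by a circular buffer (a simplified
illustration, not the non-blocking implementation used by the platform):
</p>
<pre>
// Fixed-capacity FIFO of mono 16-bit frames backed by a circular buffer.
// Instead of blocking, write() reports a full queue and read() an empty one.
static class FrameFifo {
    private final short[] buffer;
    private int readIndex = 0, writeIndex = 0, count = 0;

    FrameFifo(int capacityInFrames) {
        buffer = new short[capacityInFrames];
    }

    synchronized boolean write(short frame) {
        if (count == buffer.length) return false;   // queue full; overrun risk
        buffer[writeIndex] = frame;
        writeIndex = (writeIndex + 1) % buffer.length;
        count++;
        return true;
    }

    synchronized Short read() {
        if (count == 0) return null;                // queue empty; underrun risk
        short frame = buffer[readIndex];
        readIndex = (readIndex + 1) % buffer.length;
        count--;
        return frame;
    }
}
</pre>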
</dd>

<dt>frame</dt>
<dd>
Set of samples, one per channel, at a point in time.
</dd>

<dt>frames per buffer</dt>
<dd>
Number of frames handed from one module to the next at one time. The audio HAL
interface uses the concept of frames per buffer.
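<p>
For example, the size of a buffer in bytes follows from the frame count, the
channel count, and the bytes per sample (a worked example, not an API):
</p>
<pre>
// 256 frames per buffer of stereo 16-bit PCM:
// bytesPerFrame = channels * bytesPerSample = 2 * 2 = 4
// bufferSizeInBytes = framesPerBuffer * bytesPerFrame = 256 * 4 = 1024
static int bufferSizeInBytes(int framesPerBuffer, int channelCount, int bytesPerSample) {
    return framesPerBuffer * channelCount * bytesPerSample;
}
</pre>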
</dd>

<dt>gain</dt>
<dd>
Multiplicative factor greater than or equal to 1.0, applied to an audio signal
to increase the signal level. Compare to <em>attenuation</em>.
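<p>
A minimal sketch of applying gain (or attenuation) to 16-bit PCM samples; the
result is clamped to the representable range so overflow cannot wrap around:
</p>
<pre>
// Multiply each sample by a factor; a factor above 1.0 is gain,
// below 1.0 is attenuation.
static void applyGain(short[] samples, float factor) {
    for (int i = 0; i &lt; samples.length; i++) {
        int scaled = Math.round(samples[i] * factor);
        // Clamp to the 16-bit PCM range; values outside it would otherwise wrap.
        samples[i] = (short) Math.max(-32768, Math.min(32767, scaled));
    }
}
</pre>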
</dd>

<dt>HD audio</dt>
<dd>
High-Definition audio. Synonym for high-resolution audio (but different from
Intel High Definition Audio).
</dd>

<dt>Hz</dt>
<dd>
Units for sample rate or frame rate.
</dd>

<dt>high-resolution audio</dt>
<dd>
Representation with greater bit depth and sample rate than CDs (stereo 16-bit
PCM at 44.1 kHz) and without lossy data compression. Equivalent to HD audio.
For details, refer to
<a href="http://en.wikipedia.org/wiki/High-resolution_audio">high-resolution
audio</a>.
</dd>

<dt>latency</dt>
<dd>
Time delay as a signal passes through a system.
</dd>

<dt>lossless</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossless_compression">lossless data
compression algorithm</a> that preserves bit accuracy across encoding and
decoding, where the result of decoding previously encoded data is equivalent
to the original data. Examples of lossless audio content distribution formats
include <a href="http://en.wikipedia.org/wiki/Compact_disc">CDs</a>, PCM within
<a href="http://en.wikipedia.org/wiki/WAV">WAV</a>, and
<a href="http://en.wikipedia.org/wiki/FLAC">FLAC</a>.
The authoring process may reduce the bit depth or sample rate from that of the
<a href="http://en.wikipedia.org/wiki/Audio_mastering">masters</a>; distribution
formats that preserve the resolution and bit accuracy of masters are the subject
of high-resolution audio.
</dd>

<dt>lossy</dt>
<dd>
A <a href="http://en.wikipedia.org/wiki/Lossy_compression">lossy data
compression algorithm</a> that attempts to preserve the most important features
of media across encoding and decoding, where the result of decoding previously
encoded data is perceptually similar to the original data but not identical.
Examples of lossy audio compression algorithms include MP3 and AAC. As analog
values are from a continuous domain and digital values are discrete, ADC and DAC
are lossy conversions with respect to amplitude. See also <em>transparency</em>.
</dd>

<dt>mono</dt>
<dd>
One channel.
</dd>

<dt>multichannel</dt>
<dd>
See <em>surround sound</em>. In strict terms, <em>stereo</em> is more than one
channel and could be considered multichannel; however, such usage is confusing
and thus avoided.
</dd>

<dt>mute</dt>
<dd>
Temporarily force volume to be zero, independent from the usual volume controls.
</dd>

<dt>overrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to accept supplied data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>underrun</em>.
</dd>

<dt>panning</dt>
<dd>
Direct a signal to a desired position within a stereo or multichannel field.
</dd>

<dt>PCM</dt>
<dd>
Pulse Code Modulation. Most common low-level encoding of digital audio. The
audio signal is sampled at a regular interval, called the sample rate, then
quantized to discrete values within a particular range depending on the bit
depth. For example, for 16-bit PCM the sample values are integers between
-32768 and +32767.
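<p>
As an illustration of quantization (a sketch, not the platform's conversion
routines), a floating-point sample in the range [-1.0, 1.0] maps to 16-bit PCM
as follows:
</p>
<pre>
// Quantize a float sample in [-1.0, 1.0] to a 16-bit PCM integer.
static short floatToPcm16(float sample) {
    // Clamp first so out-of-range input cannot wrap around.
    float clamped = Math.max(-1.0f, Math.min(1.0f, sample));
    return (short) Math.round(clamped * 32767.0f);
}
</pre>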
</dd>

<dt>ramp</dt>
<dd>
Gradually increase or decrease the level of a particular audio parameter, such
as the volume or the strength of an effect. A volume ramp is commonly applied
when pausing and resuming music to avoid a hard audible transition.
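<p>
A minimal sketch of a linear volume ramp applied in place to a buffer of 16-bit
PCM samples, for example to fade out before a pause:
</p>
<pre>
// Fade the buffer from full level to silence by ramping a per-sample factor
// linearly from 1.0 down toward 0.0.
static void fadeOut(short[] samples) {
    for (int i = 0; i &lt; samples.length; i++) {
        float factor = 1.0f - (float) i / samples.length;
        samples[i] = (short) Math.round(samples[i] * factor);
    }
}
</pre>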
</dd>

<dt>sample</dt>
<dd>
Number representing the audio value for a single channel at a point in time.
</dd>

<dt>sample rate or frame rate</dt>
<dd>
Number of frames per second. While <em>frame rate</em> is more accurate,
<em>sample rate</em> is conventionally used to mean frame rate.
</dd>

<dt>sonification</dt>
<dd>
Use of sound to express feedback or information, such as touch sounds and
keyboard sounds.
</dd>

<dt>stereo</dt>
<dd>
Two channels.
</dd>

<dt>stereo widening</dt>
<dd>
Effect applied to a stereo signal to make another stereo signal that sounds
fuller and richer. The effect can also be applied to a mono signal, where it is
a type of upmixing.
</dd>

<dt>surround sound</dt>
<dd>
Techniques for increasing the ability of a listener to perceive sound position
beyond stereo left and right.
</dd>

<dt>transparency</dt>
<dd>
Ideal result of lossy data compression. Lossy data conversion is transparent if
it is perceptually indistinguishable from the original by a human subject. For
details, refer to
<a href="http://en.wikipedia.org/wiki/Transparency_%28data_compression%29">Transparency</a>.
</dd>

<dt>underrun</dt>
<dd>
Audible <a href="http://en.wikipedia.org/wiki/Glitch">glitch</a> caused by
failure to supply needed data in sufficient time. For details, refer to
<a href="http://en.wikipedia.org/wiki/Buffer_underrun">buffer underrun</a>.
Compare to <em>overrun</em>.
</dd>

<dt>upmixing</dt>
<dd>
Increase the number of channels, such as from mono to stereo or from stereo to
surround sound. Accomplished by duplication, panning, or more advanced signal
processing. Compare to <em>downmixing</em>.
</dd>

<dt>virtualizer</dt>
<dd>
Effect that attempts to spatialize audio channels, such as trying to simulate
more speakers or give the illusion that sound sources have position.
</dd>

<dt>volume</dt>
<dd>
Loudness, the subjective strength of an audio signal.
</dd>

</dl>

<h3 id="interDeviceTerms">Inter-device interconnect</h3>

<p>
Inter-device interconnection technologies connect audio and video components
between devices and are readily visible at the external connectors. The HAL
implementer and end user should be aware of these terms.
</p>

<dl>

<dt>Bluetooth</dt>
<dd>
Short range wireless technology. For details on the audio-related
<a href="http://en.wikipedia.org/wiki/Bluetooth_profile">Bluetooth profiles</a>
and
<a href="http://en.wikipedia.org/wiki/Bluetooth_protocols">Bluetooth protocols</a>,
refer to <a href="http://en.wikipedia.org/wiki/Bluetooth_profile#Advanced_Audio_Distribution_Profile_.28A2DP.29">A2DP</a> for
music, <a href="http://en.wikipedia.org/wiki/Bluetooth_protocols#Synchronous_connection-oriented_.28SCO.29_link">SCO</a> for telephony, and <a href="http://en.wikipedia.org/wiki/List_of_Bluetooth_profiles#Audio.2FVideo_Remote_Control_Profile_.28AVRCP.29">Audio/Video Remote Control Profile (AVRCP)</a>.
</dd>

<dt>DisplayPort</dt>
<dd>
Digital display interface by the Video Electronics Standards Association (VESA).
</dd>

<dt>dongle</dt>
<dd>
A <a href="https://en.wikipedia.org/wiki/Dongle">dongle</a>
is a small gadget, especially one that hangs off another device.
</dd>

<dt>HDMI</dt>
<dd>
High-Definition Multimedia Interface. Interface for transferring audio and
video data. For mobile devices, a micro-HDMI (type D) or MHL connector is used.
</dd>

<dt>Intel HDA</dt>
<dd>
Intel High Definition Audio (do not confuse with generic <em>high-definition
audio</em> or <em>high-resolution audio</em>). Specification for a front-panel
connector. For details, refer to
<a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel High
Definition Audio</a>.
</dd>

<dt>interface</dt>
<dd>
An <a href="https://en.wikipedia.org/wiki/Interface_(computing)">interface</a>
converts a signal from one representation to another. Common interfaces
include a USB audio interface and MIDI interface.
</dd>

<dt>line level</dt>
<dd>
<a href="http://en.wikipedia.org/wiki/Line_level">Line level</a> is the strength
of an analog audio signal that passes between audio components, not transducers.
</dd>

<dt>MHL</dt>
<dd>
Mobile High-Definition Link. Mobile audio/video interface, often over micro-USB
connector.
</dd>

<dt>phone connector</dt>
<dd>
Mini or sub-mini component that connects a device to wired headphones, headset,
or line-level amplifier.
</dd>

<dt>SlimPort</dt>
<dd>
Adapter from micro-USB to HDMI.
</dd>

<dt>S/PDIF</dt>
<dd>
Sony/Philips Digital Interface Format. Interconnect for uncompressed PCM. For
details, refer to <a href="http://en.wikipedia.org/wiki/S/PDIF">S/PDIF</a>.
S/PDIF is the consumer grade variant of <a href="https://en.wikipedia.org/wiki/AES3">AES3</a>.
</dd>

<dt>Thunderbolt</dt>
<dd>
Multimedia interface that competes with USB and HDMI for connecting to high-end
peripherals. For details, refer to <a href="http://en.wikipedia.org/wiki/Thunderbolt_%28interface%29">Thunderbolt</a>.
</dd>

<dt>TOSLINK</dt>
<dd>
<a href="https://en.wikipedia.org/wiki/TOSLINK">TOSLINK</a> is an optical audio cable
used with <em>S/PDIF</em>.
</dd>

<dt>USB</dt>
<dd>
Universal Serial Bus. For details, refer to
<a href="http://en.wikipedia.org/wiki/USB">USB</a>.
</dd>

</dl>

<h3 id="intraDeviceTerms">Intra-device interconnect</h3>

<p>
Intra-device interconnection technologies connect internal audio components
within a given device and are not visible without disassembling the device. The
HAL implementer may need to be aware of these, but not the end user. For details
on intra-device interconnections, refer to the following articles:
</p>
<ul>
<li><a href="http://en.wikipedia.org/wiki/General-purpose_input/output">GPIO</a></li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2C">I²C</a>, for control channel</li>
<li><a href="http://en.wikipedia.org/wiki/I%C2%B2S">I²S</a>, for audio data, simpler than SLIMbus</li>
<li><a href="http://en.wikipedia.org/wiki/McASP">McASP</a></li>
<li><a href="http://en.wikipedia.org/wiki/SLIMbus">SLIMbus</a></li>
<li><a href="http://en.wikipedia.org/wiki/Serial_Peripheral_Interface_Bus">SPI</a></li>
<li><a href="http://en.wikipedia.org/wiki/AC%2797">AC'97</a></li>
<li><a href="http://en.wikipedia.org/wiki/Intel_High_Definition_Audio">Intel HDA</a></li>
<li><a href="http://mipi.org/specifications/soundwire">SoundWire</a></li>
</ul>

<p>
In
<a href="http://www.alsa-project.org/main/index.php/ASoC">ALSA System on Chip (ASoC)</a>,
these are collectively called
<a href="https://www.kernel.org/doc/Documentation/sound/alsa/soc/DAI.txt">Digital Audio Interfaces</a>
(DAI).
</p>

<h3 id="signalTerms">Audio Signal Path</h3>

<p>
Audio signal path terms relate to the path that audio data follows from
an application to the transducer, or vice versa.
</p>

<dl>

<dt>ADC</dt>
<dd>
Analog-to-digital converter. Module that converts an analog signal (continuous
in time and amplitude) to a digital signal (discrete in time and amplitude).
Conceptually, an ADC consists of a periodic sample-and-hold followed by a
quantizer, although it does not have to be implemented that way. An ADC is
usually preceded by a low-pass filter to remove any high frequency components
that are not representable using the desired sample rate. For details, refer to
<a href="http://en.wikipedia.org/wiki/Analog-to-digital_converter">Analog-to-digital
converter</a>.
</dd>

<dt>AP</dt>
<dd>
Application processor. Main general-purpose computer on a mobile device.
</dd>

<dt>codec</dt>
<dd>
Coder-decoder. Module that encodes and/or decodes an audio signal from one
representation to another (typically analog to PCM or PCM to analog). In strict
terms, <em>codec</em> is reserved for modules that both encode and decode but
can be used loosely to refer to only one of these. For details, refer to
<a href="http://en.wikipedia.org/wiki/Audio_codec">Audio codec</a>.
</dd>

<dt>DAC</dt>
<dd>
Digital-to-analog converter. Module that converts a digital signal (discrete in
time and amplitude) to an analog signal (continuous in time and amplitude).
Often followed by a low-pass filter to remove high-frequency components
introduced by digital quantization. For details, refer to
<a href="http://en.wikipedia.org/wiki/Digital-to-analog_converter">Digital-to-analog
converter</a>.
</dd>

<dt>DSP</dt>
<dd>
Digital Signal Processor. Optional component typically located after the
application processor (for output) or before the application processor (for
input). Primary purpose is to off-load the application processor and provide
signal processing features at a lower power cost.
</dd>

<dt>PDM</dt>
<dd>
Pulse-density modulation. Form of modulation used to represent an analog signal
by a digital signal, where the relative density of 1s versus 0s indicates the
signal level. Commonly used by digital-to-analog converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-density_modulation">Pulse-density
modulation</a>.
</dd>

<dt>PWM</dt>
<dd>
Pulse-width modulation. Form of modulation used to represent an analog signal by
a digital signal, where the relative width of a digital pulse indicates the
signal level. Commonly used by analog-to-digital converters. For details, refer
to <a href="http://en.wikipedia.org/wiki/Pulse-width_modulation">Pulse-width
modulation</a>.
</dd>

<dt>transducer</dt>
<dd>
Converts between variations in physical real-world quantities and electrical
signals. In audio, the physical quantity is sound pressure, and the transducers
are the loudspeaker and microphone. For details, refer to
<a href="http://en.wikipedia.org/wiki/Transducer">Transducer</a>.
</dd>

</dl>

<h3 id="srcTerms">Sample Rate Conversion</h3>
<p>
Sample rate conversion terms relate to the process of converting from one
sampling rate to another.
</p>

<dl>

<dt>downsample</dt>
<dd>Resample, where sink sample rate &lt; source sample rate.</dd>

<dt>Nyquist frequency</dt>
<dd>
Maximum frequency component that can be represented by a discretized signal;
it is equal to 1/2 of the sample rate. For example, the human hearing range
extends to approximately 20 kHz, so a digital audio signal must have a sample
rate of at least 40 kHz to represent that range. In practice, sample rates of
44.1 kHz and 48 kHz are commonly used, with Nyquist frequencies of 22.05 kHz
and 24 kHz respectively. For details, refer to
<a href="http://en.wikipedia.org/wiki/Nyquist_frequency">Nyquist frequency</a>
and
<a href="http://en.wikipedia.org/wiki/Hearing_range">Hearing range</a>.
</dd>

<dt>resampler</dt>
<dd>Synonym for sample rate converter.</dd>

<dt>resampling</dt>
<dd>Process of converting sample rate.</dd>

<dt>sample rate converter</dt>
<dd>Module that resamples.</dd>

<dt>sink</dt>
<dd>Output of a resampler.</dd>

<dt>source</dt>
<dd>Input to a resampler.</dd>

<dt>upsample</dt>
<dd>Resample, where sink sample rate &gt; source sample rate.</dd>

</dl>
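<p>
To make the process concrete, the following is a minimal sketch of resampling by
linear interpolation (for illustration only; production resamplers, such as the
one in AudioFlinger, use higher-quality filtering):
</p>
<pre>
// Resample mono 16-bit PCM from sourceRate to sinkRate by linear interpolation.
static short[] resampleLinear(short[] source, int sourceRate, int sinkRate) {
    int sinkLength = (int) ((long) source.length * sinkRate / sourceRate);
    short[] sink = new short[sinkLength];
    for (int i = 0; i &lt; sinkLength; i++) {
        // Map the sink frame index back to a fractional source position.
        double position = (double) i * sourceRate / sinkRate;
        int index = (int) position;
        double fraction = position - index;
        int next = Math.min(index + 1, source.length - 1);
        sink[i] = (short) Math.round(source[index] * (1.0 - fraction)
                + source[next] * fraction);
    }
    return sink;
}
</pre>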

<h2 id="androidSpecificTerms">Android-Specific Terms</h2>

<p>
Android-specific terms include terms used only in the Android audio framework
and generic terms that have special meaning within Android.
</p>

<dl>

<dt>ALSA</dt>
<dd>
Advanced Linux Sound Architecture. An audio framework for Linux that has also
influenced other systems. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Advanced_Linux_Sound_Architecture">ALSA</a>.
In Android, ALSA refers to the kernel audio framework and drivers and not to the
user-mode API. See also <em>tinyalsa</em>.
</dd>

<dt>audio device</dt>
<dd>
Audio I/O endpoint backed by a HAL implementation.
</dd>

<dt>AudioEffect</dt>
<dd>
API and implementation framework for output (post-processing) effects and input
(pre-processing) effects. The API is defined at
<a href="http://developer.android.com/reference/android/media/audiofx/AudioEffect.html">android.media.audiofx.AudioEffect</a>.
</dd>

<dt>AudioFlinger</dt>
<dd>
Android sound server implementation. AudioFlinger runs within the mediaserver
process. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Sound_server">Sound server</a>.
</dd>

<dt>audio focus</dt>
<dd>
Set of APIs for managing audio interactions across multiple independent apps.
For details, see <a href="http://developer.android.com/training/managing-audio/audio-focus.html">Managing Audio Focus</a> and the focus-related methods and constants of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
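<p>
A brief sketch of requesting audio focus before playback and ducking on a
transient loss (the <code>audioManager</code> and <code>player</code> objects
are assumed to already exist in the app, for example an AudioManager obtained
from <code>getSystemService()</code> and a MediaPlayer; this uses the pre-API 26
form of <code>requestAudioFocus()</code>):
</p>
<pre>
AudioManager.OnAudioFocusChangeListener listener =
        new AudioManager.OnAudioFocusChangeListener() {
    @Override
    public void onAudioFocusChange(int focusChange) {
        if (focusChange == AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK) {
            player.setVolume(0.2f, 0.2f);   // duck while the other stream plays
        } else if (focusChange == AudioManager.AUDIOFOCUS_GAIN) {
            player.setVolume(1.0f, 1.0f);   // restore the previous level
        } else if (focusChange == AudioManager.AUDIOFOCUS_LOSS) {
            player.pause();                 // focus lost for the long term
        }
    }
};
int result = audioManager.requestAudioFocus(listener,
        AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN);
if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
    player.start();
}
</pre>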
</dd>

<dt>AudioMixer</dt>
<dd>
Module in AudioFlinger responsible for combining multiple tracks and applying
attenuation (volume) and effects. For a generic definition, refer to
<a href="http://en.wikipedia.org/wiki/Audio_mixing_(recorded_music)">Audio mixing (recorded music)</a> (discusses a mixer as a hardware device or software application, rather
than a software module within a system).
</dd>

<dt>audio policy</dt>
<dd>
Service responsible for all actions that require a policy decision to be made
first, such as opening a new I/O stream, re-routing after a change, and stream
volume management.
</dd>

<dt>AudioRecord</dt>
<dd>
Primary low-level client API for receiving data from an audio input device such
as a microphone. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioRecord.html">android.media.AudioRecord</a>.
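<p>
A minimal sketch of capturing PCM data with AudioRecord (error handling and the
RECORD_AUDIO permission check are omitted for brevity):
</p>
<pre>
int sampleRate = 48000;
int minBuf = AudioRecord.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioRecord record = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
        AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf);

short[] buffer = new short[minBuf / 2];   // 2 bytes per 16-bit sample
record.startRecording();
int samplesRead = record.read(buffer, 0, buffer.length);  // blocking read
// ... process the captured samples ...
record.stop();
record.release();
</pre>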
</dd>

<dt>AudioResampler</dt>
<dd>
Module in AudioFlinger responsible for <a href="src.html">sample rate conversion</a>.
</dd>

<dt>audio source</dt>
<dd>
An enumeration of constants that indicates the desired use case for capturing
audio input. For details, see <a href="http://developer.android.com/reference/android/media/MediaRecorder.AudioSource.html">audio source</a>. At API level 21 and higher,
<a href="attributes.html">audio attributes</a> are preferred.
</dd>

<dt>AudioTrack</dt>
<dd>
Primary low-level client API for sending data to an audio output device such as
a speaker. The data is usually in PCM format. The API is defined at
<a href="http://developer.android.com/reference/android/media/AudioTrack.html">android.media.AudioTrack</a>.
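<p>
A minimal sketch of streaming PCM data with AudioTrack (the <code>samples</code>
array stands in for audio generated or decoded by the app):
</p>
<pre>
int sampleRate = 48000;
int minBuf = AudioTrack.getMinBufferSize(sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT);
AudioTrack track = new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate,
        AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT,
        minBuf, AudioTrack.MODE_STREAM);

track.play();
short[] samples = new short[sampleRate];  // one second of mono 16-bit PCM
track.write(samples, 0, samples.length);  // blocking write in MODE_STREAM
track.stop();
track.release();
</pre>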
</dd>

<dt>audio_utils</dt>
<dd>
Audio utility library for features such as PCM format conversion, WAV file I/O,
and
<a href="avoiding_pi.html#nonBlockingAlgorithms">non-blocking FIFO</a>, which is
largely independent of the Android platform.
</dd>

<dt>client</dt>
<dd>
Usually an application or app client. However, an AudioFlinger client can be a
thread running within the mediaserver system process, such as when playing media
decoded by a MediaPlayer object.
</dd>

<dt>HAL</dt>
<dd>
Hardware Abstraction Layer. HAL is a generic term in Android; in audio, it is a
layer between AudioFlinger and the kernel device driver with a C API (which
replaces the C++ libaudio).
</dd>

<dt>FastCapture</dt>
<dd>
Thread within AudioFlinger that sends audio data to lower-latency fast tracks
and drives the input device when configured for reduced latency.
</dd>

<dt>FastMixer</dt>
<dd>
Thread within AudioFlinger that receives and mixes audio data from lower-latency
fast tracks and drives the primary output device when configured for reduced
latency.
</dd>

<dt>fast track</dt>
<dd>
AudioTrack or AudioRecord client with lower latency but fewer features on some
devices and routes.
</dd>

<dt>MediaPlayer</dt>
<dd>
Higher-level client API than AudioTrack. Plays encoded content or content that
includes multimedia audio and video tracks.
</dd>

<dt>media.log</dt>
<dd>
AudioFlinger debugging feature available in custom builds only. Used for logging
audio events to a circular buffer where they can then be retroactively dumped
when needed.
</dd>

<dt>mediaserver</dt>
<dd>
Android system process that contains media-related services, including
AudioFlinger.
</dd>

<dt>NBAIO</dt>
<dd>
Non-blocking audio input/output. Abstraction for AudioFlinger ports. The term
can be misleading as some implementations of the NBAIO API support blocking. The
key implementations of NBAIO are for different types of pipes.
</dd>

<dt>normal mixer</dt>
<dd>
Thread within AudioFlinger that services most full-featured AudioTrack clients.
Directly drives an output device or feeds its sub-mix into FastMixer via a pipe.
</dd>

<dt>OpenSL ES</dt>
<dd>
Audio API standard by
<a href="http://www.khronos.org/">The Khronos Group</a>. Android versions since
API level 9 support a native audio API that is based on a subset of
<a href="http://www.khronos.org/opensles/">OpenSL ES 1.0.1</a>.
</dd>

<dt>silent mode</dt>
<dd>
User-settable feature to mute the phone ringer and notifications without
affecting media playback (music, videos, games) or alarms.
</dd>

<dt>SoundPool</dt>
<dd>
Higher-level client API than AudioTrack. Plays sampled audio clips. Useful for
triggering UI feedback, game sounds, etc. The API is defined at
<a href="http://developer.android.com/reference/android/media/SoundPool.html">android.media.SoundPool</a>.
</dd>

<dt>Stagefright</dt>
<dd>
See <a href="{@docRoot}devices/media.html">Media</a>.
</dd>

<dt>StateQueue</dt>
<dd>
Module within AudioFlinger responsible for synchronizing state among threads.
Whereas NBAIO is used to pass data, StateQueue is used to pass control
information.
</dd>

<dt>strategy</dt>
<dd>
Group of stream types with similar behavior. Used by the audio policy service.
</dd>

<dt>stream type</dt>
<dd>
Enumeration that expresses a use case for audio output. The audio policy
implementation uses the stream type, along with other parameters, to determine
volume and routing decisions. For a list of stream types, see
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>.
</dd>

<dt>tee sink</dt>
<dd>
See <a href="debugging.html#teeSink">Audio Debugging</a>.
</dd>

<dt>tinyalsa</dt>
<dd>
Small user-mode API above the ALSA kernel interface, with a BSD license.
Recommended for HAL implementations.
</dd>

<dt>ToneGenerator</dt>
<dd>
Higher-level client API than AudioTrack. Plays dual-tone multi-frequency (DTMF)
signals. For details, refer to
<a href="http://en.wikipedia.org/wiki/Dual-tone_multi-frequency_signaling">Dual-tone
multi-frequency signaling</a> and the API definition at
<a href="http://developer.android.com/reference/android/media/ToneGenerator.html">android.media.ToneGenerator</a>.
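<p>
A brief sketch of playing a single DTMF digit with ToneGenerator:
</p>
<pre>
// Play the DTMF tone for digit "1" on the DTMF stream for 200 ms.
ToneGenerator toneGenerator =
        new ToneGenerator(AudioManager.STREAM_DTMF, ToneGenerator.MAX_VOLUME);
toneGenerator.startTone(ToneGenerator.TONE_DTMF_1, 200);
// ... later, when no more tones are needed ...
toneGenerator.release();
</pre>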
</dd>

<dt>track</dt>
<dd>
Audio stream. Controlled by the AudioTrack or AudioRecord API.
</dd>

<dt>volume attenuation curve</dt>
<dd>
Device-specific mapping from a generic volume index to a specific attenuation
factor for a given output.
</dd>

<dt>volume index</dt>
<dd>
Unitless integer that expresses the desired relative volume of a stream. The
volume-related APIs of
<a href="http://developer.android.com/reference/android/media/AudioManager.html">android.media.AudioManager</a>
operate in volume indices rather than absolute attenuation factors.
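<p>
A brief sketch of working with volume indices through AudioManager (the
<code>audioManager</code> instance is assumed to have been obtained from
<code>getSystemService()</code>):
</p>
<pre>
// Volume indices are unitless and bounded per stream type; set the music
// stream to roughly half of its maximum index and show the system volume UI.
int maxIndex = audioManager.getStreamMaxVolume(AudioManager.STREAM_MUSIC);
audioManager.setStreamVolume(AudioManager.STREAM_MUSIC, maxIndex / 2,
        AudioManager.FLAG_SHOW_UI);
</pre>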
</dd>

</dl>