page.title=Audio Latency
@jd:body

<!--
    Copyright 2013 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<div id="qv-wrapper">
  <div id="qv">
    <h2>In this document</h2>
    <ol id="auto-toc">
    </ol>
  </div>
</div>

<p>Audio latency is the time delay as an audio signal passes through a system.
  For a complete description of audio latency for the purposes of Android
  compatibility, see <em>Section 5.5 Audio Latency</em>
  in the <a href="http://source.android.com/compatibility/index.html">Android CDD</a>.
  See <a href="latency_design.html">Design For Reduced Latency</a> for an
  understanding of Android's audio latency-reduction efforts.
</p>

<p>
  This page focuses on the contributors to output latency,
  but a similar discussion applies to input latency.
</p>
<p>
  Assuming the analog circuitry does not contribute significantly, the major
  surface-level contributors to audio latency are the following:
</p>

<ul>
  <li>Application</li>
  <li>Total number of buffers in pipeline</li>
  <li>Size of each buffer, in frames (see the worked example after this list)</li>
  <li>Additional latency after the app processor, such as from a DSP</li>
</ul>
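
<p>
  To make the buffer-related contributors concrete, the latency added by
  buffering alone is roughly the total number of buffered frames divided by
  the sample rate.  The following sketch computes this for hypothetical
  values (two buffers of 256 frames at 48 kHz); the numbers are purely
  illustrative and do not describe any particular device:
</p>

<pre>
#include &lt;stdio.h&gt;

int main(void) {
    // Hypothetical pipeline configuration, for illustration only.
    const int buffer_count = 2;         // total number of buffers in the pipeline
    const int frames_per_buffer = 256;  // size of each buffer, in frames
    const int sample_rate_hz = 48000;   // output sample rate

    // Buffering delay = total buffered frames / sample rate.
    double latency_ms =
        1000.0 * buffer_count * frames_per_buffer / sample_rate_hz;

    // For these values: 2 * 256 / 48000 s, or about 10.7 ms.
    printf("Buffering alone contributes about %.1f ms of latency\n", latency_ms);
    return 0;
}
</pre>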

<p>
  As accurate as the above list of contributors may be, it is also misleading.
  The reason is that buffer count and buffer size are more of an
  <em>effect</em> than a <em>cause</em>.  What usually happens is that
  a given buffer scheme is implemented and tested, but during testing, an audio
  underrun is heard as a "click" or "pop."  To compensate, the
  system designer then increases buffer sizes or buffer counts.
  This has the desired result of eliminating the underruns, but it also
  has the undesired side effect of increasing latency.
</p>

<p>
  A better approach is to understand the causes of the
  underruns and then correct them.  This eliminates the
  audible artifacts and may even permit smaller or fewer buffers,
  thus reducing latency.
</p>

<p>
  In our experience, the most common causes of underruns include:
</p>
<ul>
  <li>Linux CFS (Completely Fair Scheduler)</li>
  <li>high-priority threads with <code>SCHED_FIFO</code> scheduling</li>
  <li>long scheduling latency</li>
  <li>long-running interrupt handlers</li>
  <li>long interrupt disable time</li>
</ul>

<h3>Linux CFS and SCHED_FIFO scheduling</h3>
<p>
  The Linux CFS is designed to be fair to competing workloads sharing a common CPU
  resource. This fairness is represented by a per-thread <em>nice</em> parameter.
  The nice value ranges from -20 (least nice, or most CPU time allocated)
  to 19 (nicest, or least CPU time allocated). In general, all threads with a given
  nice value receive approximately equal CPU time, and threads with a
  numerically lower nice value should expect to
  receive more CPU time. However, CFS is "fair" only over relatively long
  periods of observation. Over short-term observation windows,
  CFS may allocate the CPU resource in unexpected ways. For example, it
  may take the CPU away from a thread with numerically low niceness
  and give it to a thread with numerically high niceness.  In the case of audio,
  this can result in an underrun.
</p>
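
<p>
  As an illustration of the <em>nice</em> parameter, a CFS thread can ask for
  more CPU time by lowering its nice value with <code>setpriority()</code>.
  This sketch uses a hypothetical value of -16 and is not code taken from the
  Android audio system:
</p>

<pre>
#include &lt;stdio.h&gt;
#include &lt;sys/resource.h&gt;
#include &lt;sys/syscall.h&gt;
#include &lt;sys/types.h&gt;
#include &lt;unistd.h&gt;

int main(void) {
    // On Linux, PRIO_PROCESS with a thread ID affects only that thread.
    pid_t tid = syscall(SYS_gettid);

    // Lower the nice value (request more CPU time under CFS); -16 is a
    // hypothetical value, and raising priority typically needs CAP_SYS_NICE.
    if (setpriority(PRIO_PROCESS, tid, -16) != 0) {
        perror("setpriority");
        return 1;
    }

    printf("thread %d now has nice value %d\n",
           (int) tid, getpriority(PRIO_PROCESS, tid));
    return 0;
}
</pre>

<p>
  Even at a low nice value, the thread is still subject to the short-term
  behavior described above, which is why the audio system moves off CFS
  entirely, as described next.
</p>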

<p>
  The obvious solution is to avoid CFS for high-performance audio
  threads. Beginning with Android 4.1, such threads use the
  <code>SCHED_FIFO</code> scheduling policy rather than the <code>SCHED_NORMAL</code> (also called
  <code>SCHED_OTHER</code>) scheduling policy implemented by CFS.
</p>

<p>
  Though the high-performance audio threads now use <code>SCHED_FIFO</code>, they
  are still susceptible to other, higher-priority <code>SCHED_FIFO</code> threads.
  These are typically kernel worker threads, but there may also be a few
  non-audio user threads with policy <code>SCHED_FIFO</code>. The available <code>SCHED_FIFO</code>
  priorities range from 1 to 99.  The audio threads run at priority
  2 or 3.  This leaves priority 1 available for lower-priority threads,
  and priorities 4 to 99 for higher-priority threads.  We recommend that
  you use priority 1 whenever possible, and reserve priorities 4 to 99 for
  those threads that are guaranteed to complete within a bounded amount
  of time and are known not to interfere with scheduling of audio threads.
</p>
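
<p>
  For example, a latency-sensitive but non-audio thread could be given
  <code>SCHED_FIFO</code> priority 1 with <code>pthread_setschedparam()</code>.
  This is a minimal sketch of the pattern, not the code path the Android
  audio system itself uses:
</p>

<pre>
#include &lt;pthread.h&gt;
#include &lt;sched.h&gt;
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;

// Give the calling thread SCHED_FIFO priority 1, below the audio threads
// at priority 2 or 3.  Changing policy typically requires a privilege
// such as CAP_SYS_NICE.
static int use_fifo_priority_1(void) {
    struct sched_param param;
    memset(&amp;param, 0, sizeof(param));
    param.sched_priority = 1;

    int err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &amp;param);
    if (err != 0) {
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
        return -1;
    }
    return 0;
}
</pre>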

<h3>Scheduling latency</h3>
<p>
  Scheduling latency is the time between when a thread becomes
  ready to run, and when the resulting context switch completes so that the
  thread actually runs on a CPU. The shorter the latency the better, and
  anything over two milliseconds causes problems for audio. Long scheduling
  latency is most likely to occur during mode transitions, such as
  bringing up or shutting down a CPU, switching between a security kernel
  and the normal kernel, switching from full power to low-power mode,
  or adjusting the CPU clock frequency and voltage.
</p>
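
<p>
  One way to get a rough feel for this on a device is to have a thread sleep
  until an absolute deadline and record how late it actually wakes up, which
  is essentially what tools such as <code>cyclictest</code> do.  The measured
  lateness includes timer latency as well as scheduling latency.  A minimal
  sketch, assuming a Linux system with <code>clock_nanosleep()</code> (run it
  under <code>SCHED_FIFO</code> for more representative numbers):
</p>

<pre>
#include &lt;stdio.h&gt;
#include &lt;time.h&gt;

#define INTERVAL_NS 1000000L   /* 1 ms between wakeups */

/* Convert a timespec to nanoseconds for easy arithmetic. */
static long long to_ns(struct timespec t) {
    return (long long) t.tv_sec * 1000000000LL + t.tv_nsec;
}

int main(void) {
    struct timespec next, now;
    clock_gettime(CLOCK_MONOTONIC, &amp;next);

    for (int i = 0; i &lt; 1000; i++) {
        /* Advance the absolute deadline by one interval. */
        next.tv_nsec += INTERVAL_NS;
        if (next.tv_nsec &gt;= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }

        /* Sleep until the deadline, then see how late we actually woke up. */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &amp;next, NULL);
        clock_gettime(CLOCK_MONOTONIC, &amp;now);

        long long late_ns = to_ns(now) - to_ns(next);
        if (late_ns &gt; 2000000LL) {   /* more than the 2 ms budget */
            printf("iteration %d woke %lld us late\n", i, late_ns / 1000);
        }
    }
    return 0;
}
</pre>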

<h3>Interrupts</h3>
<p>
  In many designs, CPU 0 services all external interrupts.  So a
  long-running interrupt handler may delay other interrupts, in particular
  audio direct memory access (DMA) completion interrupts. Design interrupt handlers
  to finish quickly and defer any lengthy work to a thread (preferably
  a CFS thread or <code>SCHED_FIFO</code> thread of priority 1).
</p>
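
<p>
  On a Linux kernel, one way to structure this is a threaded interrupt
  handler: the hard IRQ handler does only the urgent work, and the lengthy
  part runs later in a kernel thread.  This is a sketch of the pattern with
  hypothetical device functions, not code from an actual audio driver:
</p>

<pre>
#include &lt;linux/interrupt.h&gt;

/* Hard IRQ handler: keep it short, since it can delay other interrupts,
 * including audio DMA completion interrupts. */
static irqreturn_t my_dev_irq(int irq, void *dev_id)
{
    struct my_dev *dev = dev_id;    /* hypothetical driver data */

    my_dev_ack_irq(dev);            /* hypothetical: acknowledge the hardware */
    return IRQ_WAKE_THREAD;         /* defer the rest to the threaded handler */
}

/* Threaded handler: runs in a kernel thread, so the long-running work
 * here no longer blocks interrupt servicing on CPU 0. */
static irqreturn_t my_dev_irq_thread(int irq, void *dev_id)
{
    struct my_dev *dev = dev_id;

    my_dev_do_slow_work(dev);       /* hypothetical lengthy processing */
    return IRQ_HANDLED;
}

/* At probe time, register both handlers for the same interrupt line. */
static int my_dev_setup_irq(struct my_dev *dev, int irq)
{
    return request_threaded_irq(irq, my_dev_irq, my_dev_irq_thread,
                                IRQF_ONESHOT, "my_dev", dev);
}
</pre>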

<p>
  Similarly, disabling interrupts on CPU 0 for a long period
  delays the servicing of audio interrupts.
  Long interrupt disable times typically happen while waiting for a kernel
  <i>spin lock</i>.  Review these spin locks to ensure that
  their hold times are bounded.
</p>
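
<p>
  When a spin lock is taken with interrupts disabled, one common way to keep
  the disable time bounded is to do only quick bookkeeping while holding the
  lock and move the expensive work outside the critical section.  A sketch of
  that pattern with hypothetical types and helpers:
</p>

<pre>
#include &lt;linux/list.h&gt;
#include &lt;linux/spinlock.h&gt;

/* my_queue, my_item, and process_item() are hypothetical. */
static void drain_queue(struct my_queue *q)
{
    struct my_item *item, *tmp;
    unsigned long flags;
    LIST_HEAD(local);

    /* Interrupts are off only for this short, O(1) hand-off. */
    spin_lock_irqsave(&amp;q-&gt;lock, flags);
    list_splice_init(&amp;q-&gt;items, &amp;local);
    spin_unlock_irqrestore(&amp;q-&gt;lock, flags);

    /* Interrupts are enabled again; do the long-running work out here. */
    list_for_each_entry_safe(item, tmp, &amp;local, node) {
        list_del(&amp;item-&gt;node);
        process_item(item);
    }
}
</pre>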
    145