page.title=Graphics
@jd:body

<!--
    Copyright 2015 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->

<div id="qv-wrapper">
  <div id="qv">
    <h2>In this document</h2>
    <ol id="auto-toc">
    </ol>
  </div>
</div>

<img style="float: right; margin: 0px 15px 15px 15px;"
src="images/ape_fwk_hal_graphics.png" alt="Android Graphics HAL icon"/>

<p>The Android framework offers a variety of graphics rendering APIs for 2D and
3D that interact with manufacturer implementations of graphics drivers, so it
is important to have a good understanding of how those APIs work at a higher
level. This page introduces the graphics hardware abstraction layer (HAL) upon
which those drivers are built.</p>

<p>Application developers draw images to the screen in two ways: with Canvas or
OpenGL. See <a
href="{@docRoot}devices/graphics/architecture.html">System-level graphics
architecture</a> for a detailed description of Android graphics
components.</p>
<p><a
href="http://developer.android.com/reference/android/graphics/Canvas.html">android.graphics.Canvas</a>
is a 2D graphics API and the most popular graphics API among developers.
Canvas operations draw all the stock and custom <a
href="http://developer.android.com/reference/android/view/View.html">android.view.View</a>s
in Android. Hardware acceleration for Canvas APIs is accomplished with a
drawing library called OpenGLRenderer that translates Canvas operations to
OpenGL operations so they can execute on the GPU.</p>
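<p>As a rough illustration, a custom view draws with Canvas by overriding
<code>onDraw()</code>; the <code>BadgeView</code> class below is a
hypothetical example, not part of the platform. On a hardware-accelerated
window, these Canvas calls are recorded and replayed as OpenGL ES operations
by OpenGLRenderer:</p>

<pre>
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.view.View;

public class BadgeView extends View {
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);

    public BadgeView(Context context) {
        super(context);
        paint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Drawn into the buffer backing this view's window; on a
        // hardware-accelerated window these calls become GL commands.
        canvas.drawCircle(getWidth() / 2f, getHeight() / 2f,
                Math.min(getWidth(), getHeight()) / 4f, paint);
    }
}
</pre>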
<p>Beginning in Android 4.0, hardware-accelerated Canvas is enabled by default.
Consequently, a hardware GPU that supports OpenGL ES 2.0 is mandatory for
Android 4.0 and later devices. See the
<a href="https://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware Acceleration guide</a> for an explanation of how the
hardware-accelerated drawing path works and the differences in its behavior
from that of the software drawing path.</p>

<p>In addition to Canvas, the other main way that developers render graphics is
by using OpenGL ES to directly render to a surface. Android provides OpenGL ES
interfaces in the
<a href="http://developer.android.com/reference/android/opengl/package-summary.html">android.opengl</a>
package that developers can use to call into their GL implementations with the
SDK or with native APIs provided in the <a
href="https://developer.android.com/tools/sdk/ndk/index.html">Android
NDK</a>.</p>
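<p>For the SDK path, <code>GLSurfaceView</code> manages the EGL setup and
gives a renderer callback a GL context to draw with. The sketch below is a
minimal, hypothetical renderer (the <code>ClearRenderer</code> name is not a
platform class) that simply clears the screen each frame:</p>

<pre>
import android.opengl.GLES20;
import android.opengl.GLSurfaceView;
import javax.microedition.khronos.egl.EGLConfig;
import javax.microedition.khronos.opengles.GL10;

public class ClearRenderer implements GLSurfaceView.Renderer {
    @Override
    public void onSurfaceCreated(GL10 unused, EGLConfig config) {
        GLES20.glClearColor(0f, 0f, 0.2f, 1f);
    }

    @Override
    public void onSurfaceChanged(GL10 unused, int width, int height) {
        GLES20.glViewport(0, 0, width, height);
    }

    @Override
    public void onDrawFrame(GL10 unused) {
        // Each completed frame is queued to the view's surface, where
        // SurfaceFlinger picks it up for composition.
        GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT);
    }
}
</pre>

<p>An activity would typically attach this with
<code>glSurfaceView.setEGLContextClientVersion(2)</code> followed by
<code>glSurfaceView.setRenderer(new ClearRenderer())</code>.</p>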
<p>Android implementers can test OpenGL ES functionality using the <a href="testing.html">drawElements Quality Program</a>, also known as deqp.</p>

<h2 id="android_graphics_components">Android graphics components</h2>

<p>No matter what rendering API developers use, everything is rendered onto a
"surface." The surface represents the producer side of a buffer queue that is
often consumed by SurfaceFlinger. Every window that is created on the Android
platform is backed by a surface. All of the visible surfaces rendered are
composited onto the display by SurfaceFlinger.</p>
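<p>From an app's point of view, the producer side of this arrangement shows up
wherever a Surface can be drawn into. The sketch below (the helper class and
method names are hypothetical) draws through a <code>SurfaceView</code>'s
<code>SurfaceHolder</code>: each lock/unlock cycle dequeues a buffer from the
window's queue, fills it, and queues it back for SurfaceFlinger to
composite:</p>

<pre>
import android.graphics.Canvas;
import android.graphics.Color;
import android.view.SurfaceHolder;

public class SurfaceDrawer {
    /** Draws one frame into the surface behind the given holder. */
    public void drawFrame(SurfaceHolder holder) {
        Canvas canvas = holder.lockCanvas();  // dequeue a buffer
        if (canvas == null) {
            return;  // surface is not ready yet
        }
        try {
            canvas.drawColor(Color.BLACK);
            // ... draw the rest of the frame here ...
        } finally {
            holder.unlockCanvasAndPost(canvas);  // queue the buffer for composition
        }
    }
}
</pre>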
<p>The following diagram shows how the key components work together:</p>

<img src="images/ape_fwk_graphics.png" alt="image-rendering components">

<p class="img-caption"><strong>Figure 1.</strong> How surfaces are rendered</p>

<p>The main components are described below:</p>

<h3 id="image_stream_producers">Image stream producers</h3>

<p>An image stream producer can be anything that produces graphic buffers for
consumption. Examples include OpenGL ES, Canvas 2D, and mediaserver video
decoders.</p>

<h3 id="image_stream_consumers">Image stream consumers</h3>

<p>The most common consumer of image streams is SurfaceFlinger, the system
service that consumes the currently visible surfaces and composites them onto
the display using information provided by the Window Manager. SurfaceFlinger is
the only service that can modify the content of the display. SurfaceFlinger
uses OpenGL and the Hardware Composer to compose a group of surfaces.</p>

<p>Other OpenGL ES apps can consume image streams as well, such as the camera
app consuming a camera preview image stream. Non-GL applications can be
consumers too, for example the ImageReader class.</p>
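<p>As a hedged sketch of a non-GL consumer (the <code>PreviewConsumer</code>
class here is hypothetical), an <code>ImageReader</code> exposes the producer
end of its queue as a Surface that can be handed to a source such as the
camera, and delivers each queued buffer to the app as an
<code>Image</code>:</p>

<pre>
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;
import android.view.Surface;

public class PreviewConsumer {
    private ImageReader reader;

    /** Sets up the consumer and returns the producer-side Surface. */
    public Surface start(Handler handler) {
        reader = ImageReader.newInstance(1920, 1080, ImageFormat.YUV_420_888,
                /* maxImages= */ 2);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image != null) {
                // ... inspect image.getPlanes() here ...
                image.close();  // return the buffer to the queue
            }
        }, handler);
        return reader.getSurface();  // hand this to the producer, e.g. the camera
    }
}
</pre>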
<h3 id="window_manager">Window Manager</h3>

<p>The Android system service that controls a window, which is a container for
views. A window is always backed by a surface. This service oversees
lifecycles, input and focus events, screen orientation, transitions,
animations, position, transforms, z-order, and many other aspects of a window.
The Window Manager sends all of the window metadata to SurfaceFlinger so
SurfaceFlinger can use that data to composite surfaces on the display.</p>

<h3 id="hardware_composer">Hardware Composer</h3>

<p>The hardware abstraction for the display subsystem. SurfaceFlinger can
delegate certain composition work to the Hardware Composer to offload work from
OpenGL and the GPU. SurfaceFlinger acts as just another OpenGL ES client. So
when SurfaceFlinger is actively compositing one buffer or two into a third, for
instance, it is using OpenGL ES. This makes compositing lower power than having
the GPU conduct all computation.</p>

<p>The Hardware Composer HAL conducts the other half of the work. This HAL is
the central point for all Android graphics rendering. Hardware Composer must
support events, one of which is VSYNC. Another is hotplug for plug-and-play
HDMI support.</p>

<p>See the
<a href="{@docRoot}devices/graphics.html#hardware_composer_hal">Hardware
Composer HAL</a> section for more information.</p>

<h3 id="gralloc">Gralloc</h3>

<p>The graphics memory allocator is needed to allocate memory that is requested
by image producers. See the <a
href="{@docRoot}devices/graphics.html#gralloc">Gralloc HAL</a> section for more
information.</p>

<h2 id="data_flow">Data flow</h2>

<p>See the following diagram for a depiction of the Android graphics
pipeline:</p>

<img src="images/graphics_pipeline.png" alt="graphics data flow">

<p class="img-caption"><strong>Figure 2.</strong> Graphic data flow through
Android</p>

<p>The objects on the left are renderers producing graphics buffers, such as
the home screen, status bar, and system UI. SurfaceFlinger is the compositor
and Hardware Composer is the composer.</p>

<h3 id="bufferqueue">BufferQueue</h3>

<p>BufferQueues provide the glue between the Android graphics components. These
are a pair of queues that mediate the constant cycle of buffers from the
producer to the consumer. Once the producers hand off their buffers,
SurfaceFlinger is responsible for compositing everything onto the display.</p>

<p>See the following diagram for the BufferQueue communication process.</p>

<img src="images/bufferqueue.png"
alt="BufferQueue communication process">

<p class="img-caption"><strong>Figure 3.</strong> BufferQueue communication
process</p>

<p>BufferQueue contains the logic that ties image stream producers and image
stream consumers together. Some examples of image producers are the camera
previews produced by the camera HAL or OpenGL ES games. Some examples of image
consumers are SurfaceFlinger or another app that displays an OpenGL ES stream,
such as the camera app displaying the camera viewfinder.</p>
<p>BufferQueue is a data structure that combines a buffer pool with a queue and
uses Binder IPC to pass buffers between processes. The producer interface, or
what you pass to somebody who wants to generate graphic buffers, is
IGraphicBufferProducer (part of <a
href="http://developer.android.com/reference/android/graphics/SurfaceTexture.html">SurfaceTexture</a>).
BufferQueue is often used to render to a Surface and consume with a
GLConsumer, among other tasks.</p>

<p>BufferQueue can operate in three different modes:</p>
<p><em>Synchronous-like mode</em> - By default, BufferQueue operates in a
synchronous-like mode, in which every buffer that comes in from the producer
goes out to the consumer. No buffer is ever discarded in this mode. If the
producer is too fast and creates buffers faster than they are drained, it
blocks and waits for free buffers.</p>
<p><em>Non-blocking mode</em> - BufferQueue can also operate in a non-blocking
mode where it generates an error rather than waiting for a buffer in those
cases. No buffer is ever discarded in this mode either. This is useful for
avoiding potential deadlocks in application software that may not understand
the complex dependencies of the graphics framework.</p>

<p><em>Discard mode</em> - Finally, BufferQueue may be configured to discard
old buffers rather than generate errors or wait. For instance, if conducting GL
rendering to a texture view and drawing as quickly as possible, buffers must be
dropped.</p>
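<p>At the SDK level, both ends of a BufferQueue are visible through
<code>SurfaceTexture</code> and <code>Surface</code>: the SurfaceTexture wraps
the consumer side and latches queued buffers into a GL texture, while a
Surface constructed from it is the producer side handed to the camera, a video
decoder, or another renderer. The sketch below (class and method names are
hypothetical) runs on a thread that owns a GL context:</p>

<pre>
import android.graphics.SurfaceTexture;
import android.opengl.GLES11Ext;
import android.opengl.GLES20;
import android.view.Surface;

public class StreamSetup {
    /** Creates a consumer texture and returns the matching producer Surface. */
    public Surface setUpStream() {
        // The consumer latches buffers into an external OES texture.
        int[] tex = new int[1];
        GLES20.glGenTextures(1, tex, 0);
        GLES20.glBindTexture(GLES11Ext.GL_TEXTURE_EXTERNAL_OES, tex[0]);

        SurfaceTexture consumer = new SurfaceTexture(tex[0]);
        consumer.setOnFrameAvailableListener(st -> {
            // A producer queued a buffer; call st.updateTexImage() on the
            // thread that owns the GL context to latch it into the texture.
        });

        // The producer end of the same queue; hand it to the camera,
        // a video decoder, or another renderer.
        return new Surface(consumer);
    }
}
</pre>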
<p>To conduct most of this work, SurfaceFlinger acts as just another OpenGL ES
client. So when SurfaceFlinger is actively compositing one buffer or two into a
third, for instance, it is using OpenGL ES.</p>

<p>The Hardware Composer HAL conducts the other half of the work. This HAL acts
as the central point for all Android graphics rendering.</p>

<h3 id="synchronization_framework">Synchronization framework</h3>
<p>Since Android graphics offer no explicit parallelism, vendors have long
implemented their own implicit synchronization within their drivers. This is
no longer required with the Android graphics synchronization framework. See
the <a href="#explicit_synchronization">Explicit synchronization</a> section
for implementation instructions.</p>
<p>The synchronization framework explicitly describes dependencies between
different asynchronous operations in the system. The framework provides a
simple API that lets components signal when buffers are released. It also
allows synchronization primitives to be passed between drivers from the kernel
to userspace and between userspace processes themselves.</p>
<p>For example, an application may queue up work to be carried out on the GPU.
The GPU then starts drawing that image. Although the image hasn't been drawn
into memory yet, the buffer pointer can still be passed to the window
compositor along with a fence that indicates when the GPU work will be
finished. The window compositor may then start processing ahead of time and
hand off the work to the display controller. In this manner, the CPU work can
be done ahead of time. Once the GPU finishes, the display controller can
immediately display the image.</p>
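<p>The platform's fences are kernel-level sync objects passed between drivers
and processes, but the same pattern can be illustrated at the application
level with OpenGL ES 3.0 fence objects. The sketch below (a hypothetical
helper, not the platform sync API itself) queues GPU work, creates a fence
that signals when that work retires, and waits on it only when the result is
actually needed:</p>

<pre>
import android.opengl.GLES30;

public class FenceExample {
    public void drawAndSignal() {
        // ... issue GL draw calls for the frame ...

        // Create a fence that signals once the queued commands complete.
        long fence = GLES30.glFenceSync(GLES30.GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
        GLES30.glFlush();  // make sure the commands and the fence are submitted

        // The buffer could be handed off right away; a consumer waits on the
        // fence only when it needs the finished pixels.
        GLES30.glClientWaitSync(fence, GLES30.GL_SYNC_FLUSH_COMMANDS_BIT,
                1000000000L /* timeout in nanoseconds */);
        GLES30.glDeleteSync(fence);
    }
}
</pre>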
<p>The synchronization framework also allows implementers to leverage
synchronization resources in their own hardware components. Finally, the
framework provides visibility into the graphics pipeline to aid in
debugging.</p>