      1 page.title=Graphics
      2 @jd:body
      3 
      4 <!--
      5     Copyright 2013 The Android Open Source Project
      6 
      7     Licensed under the Apache License, Version 2.0 (the "License");
      8     you may not use this file except in compliance with the License.
      9     You may obtain a copy of the License at
     10 
     11         http://www.apache.org/licenses/LICENSE-2.0
     12 
     13     Unless required by applicable law or agreed to in writing, software
     14     distributed under the License is distributed on an "AS IS" BASIS,
     15     WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     16     See the License for the specific language governing permissions and
     17     limitations under the License.
     18 -->
     19 <div id="qv-wrapper">
     20   <div id="qv">
     21     <h2>In this document</h2>
     22     <ol id="auto-toc">
     23     </ol>
     24   </div>
     25 </div>
     26 
     27 <p>
     28   The Android framework has a variety of graphics rendering APIs for 2D and 3D that interact with
  your HAL implementations and graphics drivers, so it is important to have a good understanding of
  how they work at a high level. There are two general ways that app developers can draw things
     31   to the screen: with Canvas or OpenGL.
     32 </p>
     33 <p>
     34   <a href="http://developer.android.com/reference/android/graphics/Canvas.html">android.graphics.Canvas</a>
     35   is a 2D graphics API and is the most widely used graphics API by
     36   developers. Canvas operations draw all the stock <a href="http://developer.android.com/reference/android/view/View.html">android.view.View</a>s
  and custom <a href="http://developer.android.com/reference/android/view/View.html">android.view.View</a>s in Android. Prior to Android 3.0, Canvas always
  used the non-hardware-accelerated Skia 2D drawing library.
     39 </p>
     40 <p>
     41   Introduced in Android 3.0, hardware acceleration for Canvas APIs uses a new drawing library
     42   called OpenGLRenderer that translates Canvas operations to OpenGL operations so that they can
  execute on the GPU. Developers initially had to opt in to this feature, but beginning in Android
  4.0, hardware-accelerated Canvas is enabled by default. Consequently, a hardware GPU that
     45   supports OpenGL ES 2.0 is mandatory for Android 4.0 devices.
     46 </p>
     47 <p>
     48   Additionally, the <a href="https://developer.android.com/guide/topics/graphics/hardware-accel.html">Hardware Acceleration guide</a>
     49   explains how the hardware-accelerated drawing path works and identifies the differences in behavior from the software drawing path.
     50 </p>
     51 <p>
     52   The other main way that developers render graphics is by using OpenGL ES 1.x or 2.0 to directly
     53   render to a surface.  Android provides OpenGL ES interfaces in the
     54   <a href="http://developer.android.com/reference/android/opengl/package-summary.html">android.opengl</a> package
  that a developer can use to call into your GL implementation with the SDK or with native APIs
  provided in the Android NDK.
</p>
     57 
  <p class="note"><strong>Note:</strong> A third option, Renderscript, was introduced in Android 3.0 to
  serve as a platform-agnostic graphics rendering API (it used OpenGL ES 2.0 under the hood), but
  it will be deprecated starting in the Android 4.1 release.
     61 </p>
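
<p>
  When using the NDK path described above, a renderer typically initializes EGL itself and binds a
  context to an <code>ANativeWindow</code> before issuing GL calls. The following is a minimal
  sketch, assuming <code>window</code> is a valid <code>ANativeWindow*</code>; configuration
  selection and error handling are simplified.
</p>
<pre>
#include &lt;EGL/egl.h&gt;
#include &lt;GLES2/gl2.h&gt;

// Minimal EGL setup sketch; assumes `window` is a valid ANativeWindow*.
EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
eglInitialize(display, NULL, NULL);

const EGLint configAttribs[] = {
    EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
    EGL_SURFACE_TYPE, EGL_WINDOW_BIT,
    EGL_NONE
};
EGLConfig config;
EGLint numConfigs;
eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);

EGLSurface surface = eglCreateWindowSurface(display, config, window, NULL);

const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
eglMakeCurrent(display, surface, surface, context);

// ... issue OpenGL ES 2.0 calls, then present the frame:
eglSwapBuffers(display, surface);
</pre>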
     62 <h2 id="render">
     63   How Android Renders Graphics
     64 </h2>
     65 <p>
     66   No matter what rendering API developers use, everything is rendered onto a buffer of pixel data
     67   called a "surface." Every window that is created on the Android platform is backed by a surface.
  All of the visible surfaces that are rendered to are composited onto the display
  by SurfaceFlinger, the Android system service that manages composition of surfaces.
     70   Of course, there are more components that are involved in graphics rendering, and the
     71   main ones are described below:
     72 </p>
     73 
     74 <dl>
     75   <dt>
     76     <strong>Image Stream Producers</strong>
     77   </dt>
     78     <dd>Image stream producers can be things such as an OpenGL ES game, video buffers from the media server,
     79       a Canvas 2D application, or basically anything that produces graphic buffers for consumption.
     80     </dd>
     81 
     82   <dt>
     83     <strong>Image Stream Consumers</strong>
     84   </dt>
     85   <dd>The most common consumer of image streams is SurfaceFlinger, the system service that consumes
     86     the currently visible surfaces and composites them onto the display using
     87     information provided by the Window Manager. SurfaceFlinger is the only service that can
     88     modify the content of the display. SurfaceFlinger uses OpenGL and the
     89     hardware composer to compose a group of surfaces. Other OpenGL ES apps can consume image
     90     streams as well, such as the camera app consuming a camera preview image stream.
     91   </dd>
     92   <dt>
     93     <strong>SurfaceTexture</strong>
     94   </dt>
     95   <dd>SurfaceTexture contains the logic that ties image stream producers and image stream consumers together
     96     and is made of three parts: <code>SurfaceTextureClient</code>, <code>ISurfaceTexture</code>, and
     97     <code>SurfaceTexture</code> (in this case, <code>SurfaceTexture</code> is the actual C++ class and not
    the name of the overall component). These three parts implement the producer (<code>SurfaceTextureClient</code>),
    binder (<code>ISurfaceTexture</code>), and consumer (<code>SurfaceTexture</code>)
    roles of SurfaceTexture and handle tasks such as requesting memory from Gralloc,
    sharing memory across process boundaries, synchronizing access to buffers, and pairing the appropriate consumer with the producer.
    102     SurfaceTexture can operate in both asynchronous (producer never blocks waiting for consumer and drops frames) and
    103     synchronous (producer waits for consumer to process textures) modes. Some examples of image
    104     producers are the camera preview produced by the camera HAL or an OpenGL ES game. Some examples
    105     of image consumers are SurfaceFlinger or another app that wants to display an OpenGL ES stream
    106     such as the camera app displaying the camera viewfinder.
    107   </dd>
    108 
    109  <dt>
    110     <strong>Window Manager</strong>
    111   </dt>
    112   <dd>
    113     The Android system service that controls window lifecycles, input and focus events, screen
    114     orientation, transitions, animations, position, transforms, z-order, and many other aspects of
    115     a window (a container for views). A window is always backed by a surface. The Window Manager
    116     sends all of the window metadata to SurfaceFlinger, so SurfaceFlinger can use that data
    117     to figure out how to composite surfaces on the display.
    118   </dd>
    119   
    120   <dt>
    121     <strong>Hardware Composer</strong>
    122   </dt>
    123   <dd>
    124     The hardware abstraction for the display subsystem. SurfaceFlinger can delegate certain
    125     composition work to the hardware composer to offload work from the OpenGL and the GPU. This makes
    compositing faster than having SurfaceFlinger do all the work. Starting with Jellybean MR1,
    new versions of the hardware composer HAL have been introduced. See the <a href="#hwc">Hardware Composer HAL</a> section
    for more information.
    129   </dd>
    130 
    131     <dt>
    132     <strong>Gralloc</strong>
    133   </dt>
  <dd>Allocates memory for graphics buffers. See the <a href="#gralloc">Gralloc HAL</a> section for more information. If you
    are using version 1.1 or later of the <a href="#hwc">hardware composer</a>, this HAL is no longer needed.</dd>
    136   
    137  
    138 </dl>
    139 <p>
    140   The following diagram shows how these components work together:
    141 </p><img src="images/graphics_surface.png">
    142 <p class="img-caption">
    143   <strong>Figure 1.</strong> How surfaces are rendered
    144 </p>
    145 
    147 <h2 id="provide">
    148   What You Need to Provide
    149 </h2>
    150 <p>
    151  The following list and sections describe what you need to provide to support graphics in your product:
    152 </p>
    153 <ul>
    154   <li>OpenGL ES 1.x Driver
    155   </li>
    156   <li>OpenGL ES 2.0 Driver
    157   </li>
    158   <li>EGL Driver
    159   </li>
    160   <li>Gralloc HAL implementation
    161   </li>
    162   <li>Hardware Composer HAL implementation
    163   </li>
    164   <li>Framebuffer HAL implementation
    165   </li>
    166 </ul>
    167 <h3 id="gl">
    168   OpenGL and EGL drivers
    169 </h3>
    170 <p>
    171   You must provide drivers for OpenGL ES 1.x, OpenGL ES 2.0, and EGL. Some key things to keep in
    172   mind are:
    173 </p>
    174 <ul>
    175   <li>The GL driver needs to be robust and conformant to OpenGL ES standards.
    176   </li>
  <li>Do not limit the number of GL contexts. Because Android allows apps to run in the background and
  tries to keep their GL contexts alive, you should not limit the number of contexts in your driver. It
  is not uncommon to have 20-30 active GL contexts at once, so you should also be careful with the
  amount of memory allocated for each context.
    181   </li>
    182   <li>Support the YV12 image format and any other YUV image formats that come from other
    183     components in the system such as media codecs or the camera.
    184   </li>
  <li>Support the mandatory extensions: <code>GL_OES_texture_external</code>,
  <code>EGL_ANDROID_image_native_buffer</code>, and <code>EGL_ANDROID_recordable</code>. We highly
  recommend supporting <code>EGL_ANDROID_blob_cache</code> and <code>EGL_KHR_fence_sync</code> as
  well. (A simple runtime check for the EGL extensions is sketched after this list.)</li>
    189 </ul>
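
<p>
  One quick way to sanity-check a driver build is to query the EGL extension string at runtime. The
  following is a small sketch, assuming <code>display</code> is an initialized
  <code>EGLDisplay</code>; the exact set of strings reported is driver-specific.
</p>
<pre>
#include &lt;string.h&gt;
#include &lt;EGL/egl.h&gt;

// Returns 1 if `name` appears in the EGL extension string for `display`.
// Note: a naive substring check, which is good enough for a quick test.
static int has_egl_extension(EGLDisplay display, const char *name) {
    const char *extensions = eglQueryString(display, EGL_EXTENSIONS);
    return extensions != NULL && strstr(extensions, name) != NULL;
}

// Example usage:
//   has_egl_extension(display, "EGL_ANDROID_image_native_buffer");
//   has_egl_extension(display, "EGL_ANDROID_recordable");
//   has_egl_extension(display, "EGL_KHR_fence_sync");
// GL extensions such as GL_OES_texture_external are reported by
// glGetString(GL_EXTENSIONS) with a current context instead.
</pre>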
    190 
    191 <p>
  Note that the OpenGL API exposed to app developers is different from the OpenGL interface that
  you are implementing. Apps do not have access to the GL driver layer directly and must go through
  the interfaces provided by the framework APIs and the NDK.
    195 </p>
    196 <h4>
    197   Pre-rotation
    198 </h4>
<p>Hardware overlays often do not support rotation, so the solution is to pre-transform the buffer before
  it reaches SurfaceFlinger. A query hint was added to ANativeWindow (<code>NATIVE_WINDOW_TRANSFORM_HINT</code>)
  that represents the most likely transform to be applied to the buffer by SurfaceFlinger.

  Your GL driver can use this hint to pre-transform the buffer so that when the buffer
  actually reaches SurfaceFlinger, it is already correctly transformed. See the ANativeWindow
  interface defined in <code>system/core/include/system/window.h</code> for more details. The following
  is some pseudo-code that implements this in the hardware composer:
    207 </p>
    208 
    209 <pre>
// Query the default buffer dimensions and the transform hint from the window.
int w, h, hintTransform;
anw->query(anw, NATIVE_WINDOW_DEFAULT_WIDTH, &w);
anw->query(anw, NATIVE_WINDOW_DEFAULT_HEIGHT, &h);
anw->query(anw, NATIVE_WINDOW_TRANSFORM_HINT, &hintTransform);

// A 90 or 270 degree rotation swaps the width and height.
if (hintTransform & HAL_TRANSFORM_ROT_90)
    swap(w, h);

native_window_set_buffers_dimensions(anw, w, h);
anw->dequeueBuffer(...);

// The GL driver renders the content pre-transformed by hintTransform here.

// Tell SurfaceFlinger which transform has already been applied so that it
// can apply the inverse when compositing.
int inverseTransform = hintTransform;
if (hintTransform & HAL_TRANSFORM_ROT_90)
    inverseTransform ^= HAL_TRANSFORM_ROT_180;
native_window_set_buffers_transform(anw, inverseTransform);

anw->queueBuffer(...);
    229 </pre>
    230 
    231 <h3 id="gralloc">
    232   Gralloc HAL
    233 </h3>
    234 <p>
  The graphics memory allocator is needed to allocate memory that is requested by
  SurfaceTextureClient in image producers. You can find the HAL interface in
  <code>hardware/libhardware/include/hardware/gralloc.h</code> and a default implementation in the
  <code>hardware/libhardware/modules/gralloc</code> directory.
</p>
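
<p>
  To give a sense of the interface, the following sketch opens the gralloc module and allocates a
  single buffer through it; the width, height, format, and usage values are placeholders.
</p>
<pre>
#include &lt;hardware/hardware.h&gt;
#include &lt;hardware/gralloc.h&gt;
#include &lt;system/graphics.h&gt;

// Sketch: open the gralloc module and allocate one graphics buffer.
const hw_module_t *module;
alloc_device_t *allocDev;
buffer_handle_t handle;
int stride;

hw_get_module(GRALLOC_HARDWARE_MODULE_ID, &module);
gralloc_open(module, &allocDev);

// Usage flags tell the allocator how the buffer will be accessed,
// for example sampled as a texture and rendered to by the GPU.
int usage = GRALLOC_USAGE_HW_TEXTURE | GRALLOC_USAGE_HW_RENDER;

allocDev->alloc(allocDev, 1280, 720, HAL_PIXEL_FORMAT_RGBA_8888,
                usage, &handle, &stride);

// ... use the buffer, then release it and close the device.
allocDev->free(allocDev, handle);
gralloc_close(allocDev);
</pre>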
    239 <h4>
    240   Protected buffers
    241 </h4>
    242 <p>
    243   There is a gralloc usage flag <code>GRALLOC_USAGE_PROTECTED</code> that allows
    244   the graphics buffer to be displayed only through a hardware protected path.
    245 </p>
    246 <h3 id="hwc">
    247   Hardware Composer HAL
    248 </h3>
    249 <p>
  The hardware composer is used by SurfaceFlinger to composite surfaces to the screen. The hardware
  composer abstracts hardware objects such as overlays and 2D blitters and helps offload work that would
  normally be done with OpenGL.
    253 </p>
    254 
<p>Jellybean MR1 introduces a new version of the HAL. We recommend that you start using version 1.1 of the hardware
  composer HAL, as it provides support for the newest features (explicit synchronization, external displays, and so on).
  Keep in mind that in addition to the 1.1 version, there is also a 1.0 version of the HAL that we used for internal
  compatibility reasons, as well as a draft 1.2 version of the hardware composer HAL. We recommend that you implement
  version 1.1 until 1.2 is out of draft.
    260 </p>
    261 
    262  <p>Because the physical display hardware behind the hardware composer
    263   abstraction layer can vary from device to device, it is difficult to define recommended features, but
    264   here is some guidance:</p>
    265 
    266 <ul>
    267   <li>The hardware composer should support at least 4 overlays (status bar, system bar, application,
    268   and live wallpaper) for phones and 3 overlays for tablets (no status bar).</li>
  <li>Layers can be bigger than the screen, so the hardware composer should be able to handle layers
    that are larger than the display (for example, a wallpaper).</li>
  <li>Pre-multiplied per-pixel alpha blending and per-plane alpha blending should be supported at the same time.</li>
  <li>The hardware composer should be able to consume the same buffers that the GPU, camera, video decoder, and Skia produce,
    so supporting some of the following properties is helpful:
    274    <ul>
    275      <li>RGBA packing order</li>
    276      <li>YUV formats</li>
    277      <li>Tiling, swizzling, and stride properties</li>
    278    </ul>
    279   </li>
    280   <li>A hardware path for protected video playback must be present if you want to support protected content.</li>
    281 </ul>
    282 <p>
  The general recommendation when implementing your hardware composer is to implement a no-op
  hardware composer first (a minimal sketch follows the list below). Once you have the structure done, implement a simple algorithm to
  delegate composition to the hardware composer. For example, delegate only the first three or four
  surfaces to the overlay hardware of the hardware composer. After that, focus on common use cases,
  such as:
    288 </p>
    289 <ul>
    290   <li>Full-screen games in portrait and landscape mode
    291   </li>
    292   <li>Full-screen video with closed captioning and playback control
    293   </li>
    294   <li>The home screen (compositing the status bar, system bar, application window, and live
    295   wallpapers)
    296   </li>
    297   <li>Protected video playback
    298   </li>
    299   <li>Multiple display support
    300   </li>
    301 </ul>
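
<p>
  The following is a minimal sketch of the "no-op" starting point described above, assuming the
  version 1.1 <code>hwcomposer.h</code> interface: <code>prepare()</code> marks every layer for GLES
  composition so that SurfaceFlinger does all of the work, which is a correct (if unoptimized)
  baseline.
</p>
<pre>
#include &lt;hardware/hwcomposer.h&gt;

// No-op prepare(): mark every layer as HWC_FRAMEBUFFER so SurfaceFlinger
// composites everything with OpenGL ES. The HWC_FRAMEBUFFER_TARGET layer
// (the buffer that receives the GLES composition) is left untouched.
static int hwc_prepare(hwc_composer_device_1_t *dev, size_t numDisplays,
                       hwc_display_contents_1_t **displays) {
    for (size_t d = 0; d &lt; numDisplays; d++) {
        if (displays[d] == NULL)
            continue;
        for (size_t i = 0; i &lt; displays[d]->numHwLayers; i++) {
            hwc_layer_1_t *layer = &displays[d]->hwLayers[i];
            if (layer->compositionType != HWC_FRAMEBUFFER_TARGET)
                layer->compositionType = HWC_FRAMEBUFFER;
        }
    }
    return 0;
}
</pre>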
    302 <p>
  After implementing the common use cases, you can focus on optimizations, such as intelligently
  selecting the surfaces to send to the overlay hardware in a way that maximizes the load taken off of the
  GPU. Another optimization is to detect whether the screen is updating. If not, delegate composition
  to OpenGL instead of the hardware composer to save power. When the screen updates again, continue to
  offload composition to the hardware composer.
    308 </p>
    309 
    310 <p>
    311   You can find the HAL for the hardware composer in the
    312   <code>hardware/libhardware/include/hardware/hwcomposer.h</code> and <code>hardware/libhardware/include/hardware/hwcomposer_defs.h</code>
    313   files. A stub implementation is available in the <code>hardware/libhardware/modules/hwcomposer</code> directory.
    314 </p>
    315 
    316 <h4>
    317   VSYNC
    318 </h4>
    319 <p>
    320   VSYNC synchronizes certain events to the refresh cycle of the display. Applications always
    321   start drawing on a VSYNC boundary and SurfaceFlinger always composites on a VSYNC boundary.
    322   This eliminates stutters and improves visual performance of graphics.
    323   The hardware composer has a function pointer</p>
    324 
    <pre>int (*waitForVsync)(int64_t *timestamp)</pre>
    326 
    327   <p>that points to a function you must implement for VSYNC. This function blocks until
    328     a VSYNC happens and returns the timestamp of the actual VSYNC.
    A client can receive VSYNC timestamps once, at specified intervals, or continuously (an interval of 1).
    You must implement VSYNC with no more than 1 ms of lag at the maximum (0.5 ms or less is recommended), and
    331     the timestamps returned must be extremely accurate.
    332 </p>
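
<p>
  The following is one possible shape of such an implementation, shown only as a sketch: it assumes a
  hypothetical helper, <code>wait_for_display_vsync()</code>, that blocks until the display
  controller's vsync interrupt fires. In a real driver the timestamp should come from the interrupt
  itself; reading the clock afterwards, as done here, adds latency and is only a fallback.
</p>
<pre>
#include &lt;stdint.h&gt;
#include &lt;time.h&gt;

// Hypothetical helper: blocks until the next vsync event from the display
// controller. A real implementation would wait on an event exposed by the
// kernel display driver.
extern int wait_for_display_vsync(void);

// Blocks until a VSYNC occurs and returns its timestamp in nanoseconds.
static int wait_for_vsync(int64_t *timestamp) {
    if (wait_for_display_vsync() != 0)
        return -1;

    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    *timestamp = (int64_t)ts.tv_sec * 1000000000LL + ts.tv_nsec;
    return 0;
}
</pre>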
    333 
    334 <h4>Explicit synchronization</h4>
    335 <p>Explicit synchronization is required in Jellybean MR1 and later and provides a mechanism
    336 for Gralloc buffers to be acquired and released in a synchronized way.
    337 Explicit synchronization allows producers and consumers of graphics buffers to signal when
    338 they are done with a buffer. This allows the Android system to asynchronously queue buffers
    339 to be read or written with the certainty that another consumer or producer does not currently need them.</p>
    340 <p>
    341 This communication is facilitated with the use of synchronization fences, which are now required when requesting
    342 a buffer for consuming or producing. The
    343  synchronization framework consists of three main parts:</p>
    344 <ul>
    345   <li><code>sync_timeline</code>: a monotonically increasing timeline that should be implemented
    346     for each driver instance. This basically is a counter of jobs submitted to the kernel for a particular piece of hardware.</li>
    347     <li><code>sync_pt</code>: a single value or point on a <code>sync_timeline</code>. A point
    348       has three states: active, signaled, and error. Points start in the active state and transition
    349       to the signaled or error states. For instance, when a buffer is no longer needed by an image
      consumer, this <code>sync_pt</code> is signaled so that image producers
    351       know that it is okay to write into the buffer again.</li>
    352     <li><code>sync_fence</code>: a collection of <code>sync_pt</code>s that often have different
    353       <code>sync_timeline</code> parents (such as for the display controller and GPU). This allows
    354       multiple consumers or producers to signal that
    355       they are using a buffer and to allow this information to be communicated with one function parameter.
    356       Fences are backed by a file descriptor and can be passed from kernel-space to user-space.
      For instance, a fence can contain two <code>sync_pt</code>s that signify when two separate
      image consumers are done reading a buffer. When the fence is signaled,
      the image producer knows that both consumers are done consuming. (The user-space API for waiting on
      and merging fences is sketched after this list.)</li>
    360     </ul>
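
<p>
  From user space, fences are plain file descriptors and are manipulated with the small library in
  <code>system/core/libsync</code>. The following sketch shows a consumer waiting on an acquire fence
  before reading a buffer and merging two fences into one; the fence file descriptors themselves
  (<code>acquireFenceFd</code>, <code>releaseFenceFd1</code>, <code>releaseFenceFd2</code>) are
  assumed to come from the producer or the hardware composer.
</p>
<pre>
#include &lt;unistd.h&gt;
#include &lt;sync/sync.h&gt;

// acquireFenceFd signals when the producer has finished writing the buffer;
// wait (with a timeout in milliseconds) before reading from it.
int err = sync_wait(acquireFenceFd, 3000 /* ms */);
if (err &lt; 0) {
    // Timeout or error: the buffer is not yet safe to read.
}

// Two fences (for example, release fences from two different consumers) can
// be merged into a single fd that signals only when both have signaled.
int merged = sync_merge("merged-release", releaseFenceFd1, releaseFenceFd2);

// Fences are ordinary file descriptors and must eventually be closed.
close(acquireFenceFd);
close(merged);
</pre>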
    361 
<p>To implement explicit synchronization, you need to provide the following:</p>
    363 
    364 <ul>
  <li>A kernel-space driver that implements a synchronization timeline for a particular piece of hardware. Generally,
    any driver that accesses or communicates with the hardware composer needs to be fence-aware.
    See the <code>system/core/include/sync/sync.h</code> file for more implementation details. The
    <code>system/core/libsync</code> directory includes a library to communicate with the kernel-space sync framework.</li>
    369   <li>A hardware composer HAL module (version 1.1 or later) that supports the new synchronization functionality. You will need to provide
    370   the appropriate synchronization fences as parameters to the <code>set()</code> and <code>prepare()</code> functions in the HAL. As a last resort,
    371 you can pass in -1 for the file descriptor parameters if you cannot support explicit synchronization for some reason. This
    372 is not recommended, however.</li>
  <li>Two GL-specific extensions related to fences, <code>EGL_ANDROID_native_fence_sync</code> and <code>EGL_ANDROID_wait_sync</code>,
    along with fence support incorporated into your graphics drivers (a brief sketch of the EGL side follows this list).</li>
</ul>
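
<p>
  On the EGL side, <code>EGL_ANDROID_native_fence_sync</code> lets a GL client turn a sync object
  into a fence file descriptor that the rest of the system can wait on. The following is a minimal
  sketch, assuming <code>display</code> is an initialized <code>EGLDisplay</code> and that the
  driver exports the extension entry points.
</p>
<pre>
#include &lt;EGL/egl.h&gt;
#include &lt;EGL/eglext.h&gt;
#include &lt;GLES2/gl2.h&gt;

// Resolve the extension entry points.
PFNEGLCREATESYNCKHRPROC eglCreateSyncKHR =
    (PFNEGLCREATESYNCKHRPROC)eglGetProcAddress("eglCreateSyncKHR");
PFNEGLDUPNATIVEFENCEFDANDROIDPROC eglDupNativeFenceFDANDROID =
    (PFNEGLDUPNATIVEFENCEFDANDROIDPROC)eglGetProcAddress("eglDupNativeFenceFDANDROID");

// Create a native fence sync object; it signals when all GL commands issued
// before this point have completed.
EGLSyncKHR sync = eglCreateSyncKHR(display, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);

// The fence fd is not created until the commands are flushed to the GPU.
glFlush();

// Extract a sync fence file descriptor that can be passed to other processes
// or to the hardware composer.
int fenceFd = eglDupNativeFenceFDANDROID(display, sync);
</pre>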
    375 
    376 
    377 
    378