page.title=Graphics
@jd:body

<!--
    Copyright 2010 The Android Open Source Project

    Licensed under the Apache License, Version 2.0 (the "License");
    you may not use this file except in compliance with the License.
    You may obtain a copy of the License at

        http://www.apache.org/licenses/LICENSE-2.0

    Unless required by applicable law or agreed to in writing, software
    distributed under the License is distributed on an "AS IS" BASIS,
    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    See the License for the specific language governing permissions and
    limitations under the License.
-->
<div id="qv-wrapper">
  <div id="qv">
    <h2>In this document</h2>
    <ol id="auto-toc">
    </ol>
  </div>
</div>

<p>
  The Android framework offers a variety of graphics rendering APIs for 2D and 3D that interact
  with your HAL implementations and graphics drivers, so it is important to understand how they
  work at a higher level. There are two general ways that app developers can draw things to the
  screen: with Canvas or OpenGL.
</p>
<p>
  <a href="http://developer.android.com/reference/android/graphics/Canvas.html">android.graphics.Canvas</a>
  is a 2D graphics API and the one most widely used by developers. Canvas operations draw all the
  stock and custom <a href="http://developer.android.com/reference/android/view/View.html">android.view.View</a>s
  in Android. Prior to Android 3.0, Canvas drew with the Skia 2D drawing library, which could not
  take advantage of hardware acceleration.
</p>
<p>
  Introduced in Android 3.0, hardware acceleration for Canvas APIs uses a new drawing library
  called OpenGLRenderer that translates Canvas operations to OpenGL operations so they can
  execute on the GPU. Developers previously had to opt in to this feature, but beginning with
  Android 4.0, hardware-accelerated Canvas is enabled by default. Consequently, a hardware GPU
  that supports OpenGL ES 2.0 is mandatory for Android 4.0 devices.
</p>
<p>
  The OpenGLRenderer does not interact with Skia, so we anticipate that Skia will be slowly phased
  out without adverse effects on developers. Skia is currently deprecated and in maintenance mode
  but will be necessary for a while because most apps published today still rely on
  non-hardware-accelerated Canvas operations. In addition, not all Skia operations are supported
  by OpenGL, so some operations are still done in software with Skia even with hardware
  acceleration turned on.
</p>
<p>
  The other main way that developers render graphics is by using OpenGL ES 1.x or 2.0 to render
  directly to a surface. Android provides OpenGL ES interfaces in the
  <a href="http://developer.android.com/reference/android/opengl/package-summary.html">android.opengl</a> package
  that developers can use to call into your GL implementation with the SDK or with native APIs
  provided in the Android NDK.
</p>
<p class="note"><strong>Note:</strong> A third option, Renderscript, was introduced in Android 3.0
  to serve as a platform-agnostic graphics rendering API (it used OpenGL ES 2.0 under the hood),
  but it will be deprecated starting in the Android 4.1 release.
</p>
<h2 id="render">
  How Android Renders Graphics
</h2>
<p>
  No matter what rendering API developers use, everything is rendered onto a buffer of pixel data
  called a "surface." Every window created on the Android platform is backed by a surface. All of
  the visible surfaces that are rendered to are composited onto the display by SurfaceFlinger,
  the Android system service that manages composition of surfaces. Of course, more components are
  involved in graphics rendering, and the main ones are described below:
</p>

<dl>
  <dt>
    <strong>Image Stream Producers</strong>
  </dt>
  <dd>Image stream producers can be things such as an OpenGL ES game, video buffers from the media
    server, a Canvas 2D application, or basically anything that produces graphics buffers for
    consumption.
  </dd>

  <dt>
    <strong>Image Stream Consumers</strong>
  </dt>
  <dd>The most common consumer of image streams is SurfaceFlinger, the system service that consumes
    the currently visible surfaces and composites them onto the display using
    information provided by the Window Manager. SurfaceFlinger is the only service that can
    modify the content of the display. SurfaceFlinger uses OpenGL and the
    hardware composer to compose a group of surfaces. Other OpenGL ES apps can consume image
    streams as well, such as the camera app consuming a camera preview image stream.
  </dd>

  <dt>
    <strong>SurfaceTexture</strong>
  </dt>
  <dd>SurfaceTexture contains the logic that ties image stream producers and image stream consumers
    together and is made of three parts: <code>SurfaceTextureClient</code>, <code>ISurfaceTexture</code>,
    and <code>SurfaceTexture</code> (in this case, <code>SurfaceTexture</code> is the actual C++ class
    and not the name of the overall component). These three parts facilitate the producer
    (<code>SurfaceTextureClient</code>), binder (<code>ISurfaceTexture</code>), and consumer
    (<code>SurfaceTexture</code>) components of SurfaceTexture in processes such as requesting memory
    from Gralloc, sharing memory across process boundaries, synchronizing access to buffers, and
    pairing the appropriate consumer with the producer.
    SurfaceTexture can operate in both asynchronous (the producer never blocks waiting for the
    consumer, and frames are dropped) and synchronous (the producer waits for the consumer to
    process textures) modes. Some examples of image producers are the camera preview produced by
    the camera HAL or an OpenGL ES game. Some examples of image consumers are SurfaceFlinger or
    another app that wants to display an OpenGL ES stream, such as the camera app displaying the
    camera viewfinder. A sketch of the producer side of this flow appears after this list.
  </dd>

  <dt>
    <strong>Window Manager</strong>
  </dt>
  <dd>
    The Android system service that controls window lifecycles, input and focus events, screen
    orientation, transitions, animations, position, transforms, z-order, and many other aspects of
    a window (a container for views). A window is always backed by a surface. The Window Manager
    sends all of the window metadata to SurfaceFlinger, so SurfaceFlinger can use that data to
    figure out how to composite surfaces on the display.
  </dd>

  <dt>
    <strong>Hardware Composer</strong>
  </dt>
  <dd>
    The hardware abstraction for the display subsystem. SurfaceFlinger can delegate certain
    composition work to the hardware composer to offload work from OpenGL and the GPU. This makes
    compositing faster than having SurfaceFlinger do all the work. Starting with Jelly Bean MR1,
    new versions of the hardware composer have been introduced. See the
    <a href="#hwc">Hardware Composer HAL</a> section for more information.
  </dd>

  <dt>
    <strong>Gralloc</strong>
  </dt>
  <dd>Allocates memory for graphics buffers. See the <a href="#gralloc">Gralloc HAL</a> section
    for more information. If you are using version 1.1 or later of the
    <a href="#hwc">hardware composer</a>, this HAL is no longer needed.
  </dd>
</dl>
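
<p>
  To make the producer half of this flow concrete, the following is a minimal sketch of how a
  producer cycles a buffer through <code>dequeueBuffer()</code> and <code>queueBuffer()</code> on
  an <code>ANativeWindow</code>. The exact signatures vary across releases (the fence parameters
  shown here assume the Jelly Bean MR1 interface), so treat this as illustrative and check
  <code>system/core/include/system/window.h</code> on your branch:
</p>
<pre>
// Sketch only: error handling omitted, signatures may differ across releases.
ANativeWindowBuffer *buffer;
int acquireFenceFd = -1;

// Ask for a free buffer; the buffers themselves come from Gralloc. The
// returned fence, if any, must signal before the buffer can be written.
anw->dequeueBuffer(anw, &buffer, &acquireFenceFd);
if (acquireFenceFd >= 0) {
    sync_wait(acquireFenceFd, -1);  // from system/core/libsync; blocks until the consumer is done
    close(acquireFenceFd);
}

// ... render into the buffer here (GL, camera, media codec, and so on) ...

// Hand the filled buffer to the consumer; -1 means it is already safe to read.
anw->queueBuffer(anw, buffer, -1);
</pre>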
<p>
  The following diagram shows how these components work together:
</p>
<img src="images/graphics_surface.png" alt="Surface rendering components">
<p class="img-caption">
  <strong>Figure 1.</strong> How surfaces are rendered
</p>

<h2 id="provide">
  What You Need to Provide
</h2>
<p>
  The following list and sections describe what you need to provide to support graphics in your
  product:
</p>
<ul>
  <li>OpenGL ES 1.x Driver
  </li>
  <li>OpenGL ES 2.0 Driver
  </li>
  <li>EGL Driver
  </li>
  <li>Gralloc HAL implementation
  </li>
  <li>Hardware Composer HAL implementation
  </li>
  <li>Framebuffer HAL implementation
  </li>
</ul>
<h3 id="gl">
  OpenGL and EGL drivers
</h3>
<p>
  You must provide drivers for OpenGL ES 1.x, OpenGL ES 2.0, and EGL. Some key things to keep in
  mind are:
</p>
<ul>
  <li>The GL driver needs to be robust and conformant to the OpenGL ES standards.
  </li>
  <li>Do not limit the number of GL contexts. Because Android allows apps in the background and
  tries to keep GL contexts alive, you should not limit the number of contexts in your driver. It
  is not uncommon to have 20-30 active GL contexts at once, so you should also be careful with the
  amount of memory allocated for each context.
  </li>
  <li>Support the YV12 image format and any other YUV image formats that come from other
    components in the system, such as media codecs or the camera.
  </li>
  <li>Support the mandatory extensions: <code>GL_OES_texture_external</code>,
  <code>EGL_ANDROID_image_native_buffer</code>, and <code>EGL_ANDROID_recordable</code>. We highly
  recommend supporting <code>EGL_ANDROID_blob_cache</code> and <code>EGL_KHR_fence_sync</code> as
  well. A quick runtime check for these extensions is sketched after this list.</li>
</ul>
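
<p>
  As a quick sanity check during bring-up, you can verify that your driver advertises these
  extensions with a small test client. The following sketch simply scans the extension strings
  (a production test should match full tokens rather than substrings):
</p>
<pre>
#include &lt;stdio.h&gt;
#include &lt;string.h&gt;
#include &lt;EGL/egl.h&gt;
#include &lt;GLES2/gl2.h&gt;

// Returns 1 if name appears in the space-separated extension list.
static int has_extension(const char *list, const char *name) {
    return list != NULL && strstr(list, name) != NULL;
}

// Assumes an EGL display is initialized and a GL ES context is current.
void check_extensions(EGLDisplay dpy) {
    const char *egl_exts = eglQueryString(dpy, EGL_EXTENSIONS);
    const char *gl_exts = (const char *) glGetString(GL_EXTENSIONS);

    printf("GL_OES_texture_external: %d\n",
           has_extension(gl_exts, "GL_OES_texture_external"));
    printf("EGL_ANDROID_image_native_buffer: %d\n",
           has_extension(egl_exts, "EGL_ANDROID_image_native_buffer"));
    printf("EGL_ANDROID_recordable: %d\n",
           has_extension(egl_exts, "EGL_ANDROID_recordable"));
}
</pre>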

<p>
  Note that the OpenGL API exposed to app developers is different from the OpenGL interface that
  you are implementing. Apps do not have access to the GL driver layer and must go through the
  interface provided by the APIs.
</p>
<h4>
  Pre-rotation
</h4>
<p>
  Hardware overlays often do not support rotation, so the solution is to pre-transform the buffer
  before it reaches SurfaceFlinger. A query hint in ANativeWindow was added
  (<code>NATIVE_WINDOW_TRANSFORM_HINT</code>) that represents the most likely transform to be
  applied to the buffer by SurfaceFlinger. Your GL driver can use this hint to pre-transform the
  buffer before it reaches SurfaceFlinger, so when the buffer actually arrives, it is correctly
  transformed. See the ANativeWindow interface defined in
  <code>system/core/include/system/window.h</code> for more details. The following pseudo-code
  implements this in the hardware composer:
</p>

<pre>
// Pseudo-code: anw is the ANativeWindow the GL driver is rendering into.
int w, h, hintTransform;
anw->query(anw, NATIVE_WINDOW_DEFAULT_WIDTH, &w);
anw->query(anw, NATIVE_WINDOW_DEFAULT_HEIGHT, &h);
anw->query(anw, NATIVE_WINDOW_TRANSFORM_HINT, &hintTransform);
if (hintTransform & HAL_TRANSFORM_ROT_90)
    swap(w, h);

native_window_set_buffers_dimensions(anw, w, h);
anw->dequeueBuffer(...);

// The GL driver renders the content transformed by hintTransform here.

// The content is already transformed, so pass the inverse transform to
// SurfaceFlinger so the final composition comes out correct.
int inverseTransform = hintTransform;
if (hintTransform & HAL_TRANSFORM_ROT_90)
    inverseTransform ^= HAL_TRANSFORM_ROT_180;

native_window_set_buffers_transform(anw, inverseTransform);

anw->queueBuffer(...);
</pre>

<h3 id="gralloc">
  Gralloc HAL
</h3>
<p>
  The graphics memory allocator is needed to allocate memory that is requested by
  SurfaceTextureClient in image producers. The interface is defined in
  <code>hardware/libhardware/include/hardware/gralloc.h</code>, and a stub implementation is
  available in the <code>hardware/libhardware/modules/gralloc</code> directory.
</p>
<h4>
  Protected buffers
</h4>
<p>
  The gralloc usage flag <code>GRALLOC_USAGE_PROTECTED</code> allows the graphics buffer to be
  displayed only through a hardware-protected path.
</p>
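
<p>
  As an illustration, a producer that needs a protected buffer includes this flag in the usage bits
  passed to your allocator. The following is a minimal sketch against the Gralloc HAL interface in
  <code>hardware/libhardware/include/hardware/gralloc.h</code>; the function name is hypothetical:
</p>
<pre>
#include &lt;hardware/gralloc.h&gt;

// Sketch: allocate a buffer that may only travel a hardware-protected path.
int alloc_protected_buffer(alloc_device_t *dev, int w, int h,
                           buffer_handle_t *handle, int *stride) {
    int usage = GRALLOC_USAGE_HW_COMPOSER |  // consumed by the hardware composer
                GRALLOC_USAGE_PROTECTED;     // keep contents out of unprotected memory
    return dev->alloc(dev, w, h, HAL_PIXEL_FORMAT_RGBA_8888,
                      usage, handle, stride);
}
</pre>
<p>
  Implementations typically reject CPU access (<code>lock()</code>) on such buffers, since mapping
  them into ordinary memory would defeat the protection.
</p>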
<h3 id="hwc">
  Hardware Composer HAL
</h3>
<p>
  The hardware composer is used by SurfaceFlinger to composite surfaces to the screen. The hardware
  composer abstracts objects such as overlays and 2D blitters and helps offload some work that
  would normally be done with OpenGL.
</p>

<p>
  Jelly Bean MR1 introduces a new version of the HAL. We recommend that you start using version 1.1
  of the hardware composer HAL, as it provides support for the newest features (explicit
  synchronization, external displays, and so on). Keep in mind that in addition to the 1.1 version,
  there is also a 1.0 version of the HAL that we used for internal compatibility reasons, and a 1.2
  draft version of the hardware composer HAL. We recommend that you implement version 1.1 until 1.2
  is out of draft mode.
</p>

<p>
  Because the physical display hardware behind the hardware composer abstraction layer can vary
  from device to device, it is difficult to define recommended features, but here is some guidance:
</p>

<ul>
  <li>The hardware composer should support at least four overlays (status bar, system bar,
    application, and live wallpaper) for phones and three overlays for tablets (no status bar).</li>
  <li>Layers can be bigger than the screen, so the hardware composer should be able to handle
    layers that are larger than the display (for example, a wallpaper).</li>
  <li>Pre-multiplied per-pixel alpha blending and per-plane alpha blending should be supported at
    the same time.</li>
  <li>The hardware composer should be able to consume the same buffers that the GPU, camera, video
    decoder, and Skia produce, so supporting some of the following properties is helpful:
   <ul>
     <li>RGBA packing order</li>
     <li>YUV formats</li>
     <li>Tiling, swizzling, and stride properties</li>
   </ul>
  </li>
  <li>A hardware path for protected video playback must be present if you want to support protected
    content.</li>
</ul>
<p>
  The general recommendation when implementing your hardware composer is to implement a no-op
  hardware composer first (a minimal sketch appears after the file locations below). Once you have
  the structure done, implement a simple algorithm to delegate composition to the hardware
  composer. For example, just delegate the first three or four surfaces to the overlay hardware of
  the hardware composer. After that, focus on common use cases, such as:
</p>
<ul>
  <li>Full-screen games in portrait and landscape mode
  </li>
  <li>Full-screen video with closed captioning and playback control
  </li>
  <li>The home screen (compositing the status bar, system bar, application window, and live
  wallpapers)
  </li>
  <li>Protected video playback
  </li>
  <li>Multiple display support
  </li>
</ul>
<p>
  After implementing the common use cases, you can focus on optimizations, such as intelligently
  selecting the surfaces to send to the overlay hardware to maximize the load taken off of the
  GPU. Another optimization is to detect whether the screen is updating. If not, delegate
  composition to OpenGL instead of the hardware composer to save power. When the screen updates
  again, continue to offload composition to the hardware composer.
</p>

<p>
  You can find the HAL for the hardware composer in the
  <code>hardware/libhardware/include/hardware/hwcomposer.h</code> and
  <code>hardware/libhardware/include/hardware/hwcomposer_defs.h</code> files. A stub implementation
  is available in the <code>hardware/libhardware/modules/hwcomposer</code> directory.
</p>
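
<p>
  A no-op version 1.1 hardware composer simply claims no layers for overlay hardware in
  <code>prepare()</code>, which makes SurfaceFlinger composite everything with OpenGL ES. The
  following sketch illustrates the idea; see <code>hwcomposer.h</code> for the authoritative
  structures:
</p>
<pre>
#include &lt;hardware/hwcomposer.h&gt;

// Sketch: a starting-point prepare() that requests GL composition for every
// layer. SurfaceFlinger then renders them all into the HWC_FRAMEBUFFER_TARGET.
static int hwc_prepare(hwc_composer_device_1_t *dev,
                       size_t numDisplays,
                       hwc_display_contents_1_t **displays) {
    for (size_t d = 0; d &lt; numDisplays; d++) {
        hwc_display_contents_1_t *list = displays[d];
        if (list == NULL)
            continue;
        for (size_t i = 0; i &lt; list->numHwLayers; i++) {
            hwc_layer_1_t *layer = &list->hwLayers[i];
            if (layer->compositionType == HWC_FRAMEBUFFER_TARGET)
                continue;  // the buffer GL renders into; leave it alone
            layer->compositionType = HWC_FRAMEBUFFER;  // composite with GL
        }
    }
    return 0;
}
</pre>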

<h4>
  VSYNC
</h4>
<p>
  VSYNC synchronizes certain events to the refresh cycle of the display. Applications always
  start drawing on a VSYNC boundary, and SurfaceFlinger always composites on a VSYNC boundary.
  This eliminates stutters and improves the visual performance of graphics.
  The hardware composer has a function pointer</p>

<pre>int (*waitForVsync)(int64_t *timestamp)</pre>

<p>that points to a function you must implement for VSYNC. This function blocks until
  a VSYNC happens and returns the timestamp of the actual VSYNC.
  A client can receive a VSYNC timestamp once, at specified intervals, or continuously (an
  interval of 1). You must implement VSYNC with no more than 1 ms of lag (0.5 ms or less is
  recommended), and the timestamps returned must be extremely accurate.
</p>
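
<p>
  How you implement the wait depends on your display hardware, but a common pattern is to block on
  a condition variable that is signaled from the display's VSYNC interrupt path. The following
  sketch assumes a hypothetical <code>on_display_vsync()</code> callback invoked with the hardware
  timestamp on every VSYNC:
</p>
<pre>
#include &lt;pthread.h&gt;
#include &lt;stdint.h&gt;

static pthread_mutex_t g_vsync_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_vsync_cond = PTHREAD_COND_INITIALIZER;
static int64_t  g_last_vsync_ns;  // written on every display VSYNC interrupt
static uint32_t g_vsync_count;

// Hypothetical hook called from the display driver on every VSYNC.
void on_display_vsync(int64_t timestamp_ns) {
    pthread_mutex_lock(&g_vsync_lock);
    g_last_vsync_ns = timestamp_ns;
    g_vsync_count++;
    pthread_cond_broadcast(&g_vsync_cond);
    pthread_mutex_unlock(&g_vsync_lock);
}

// Blocks until the next VSYNC and returns its timestamp.
int waitForVsync(int64_t *timestamp) {
    pthread_mutex_lock(&g_vsync_lock);
    uint32_t seen = g_vsync_count;
    while (g_vsync_count == seen)
        pthread_cond_wait(&g_vsync_cond, &g_vsync_lock);
    *timestamp = g_last_vsync_ns;
    pthread_mutex_unlock(&g_vsync_lock);
    return 0;
}
</pre>
<p>
  The key property is that the timestamp comes from the hardware event itself, not from when user
  space happens to wake up; otherwise the accuracy requirement above cannot be met.
</p>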

<h4>Explicit synchronization</h4>
<p>
  Explicit synchronization is required in Jelly Bean MR1 and later and provides a mechanism for
  Gralloc buffers to be acquired and released in a synchronized way. Explicit synchronization
  allows producers and consumers of graphics buffers to signal when they are done with a buffer.
  This allows the Android system to asynchronously queue buffers to be read or written with the
  certainty that another consumer or producer does not currently need them.
</p>
<p>
  This communication is facilitated with the use of synchronization fences, which are now required
  when requesting a buffer for consuming or producing. The synchronization framework consists of
  three main parts (a short user-space usage sketch follows the list):
</p>
<ul>
  <li><code>sync_timeline</code>: a monotonically increasing timeline that should be implemented
    for each driver instance. This is essentially a counter of jobs submitted to the kernel for a
    particular piece of hardware.</li>
  <li><code>sync_pt</code>: a single value or point on a <code>sync_timeline</code>. A point
    has three states: active, signaled, and error. Points start in the active state and transition
    to the signaled or error states. For instance, when a buffer is no longer needed by an image
    consumer, its <code>sync_pt</code> is signaled so that image producers
    know that it is okay to write into the buffer again.</li>
  <li><code>sync_fence</code>: a collection of <code>sync_pt</code>s that often have different
    <code>sync_timeline</code> parents (such as for the display controller and GPU). This allows
    multiple consumers or producers to signal that they are using a buffer and allows this
    information to be communicated with one function parameter. Fences are backed by a file
    descriptor and can be passed from kernel space to user space. For instance, a fence can contain
    two <code>sync_pt</code>s that signify when two separate image consumers are done reading a
    buffer. When the fence is signaled, the image producers know that both consumers are done
    consuming.</li>
</ul>
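
<p>
  User space deals with fences purely through file descriptors. As a sketch of the two operations
  from <code>system/core/libsync</code> you will use most often, a consumer waits on a fence before
  reading a buffer and merges fences when more than one producer is involved (the function name and
  timeout are illustrative):
</p>
<pre>
#include &lt;sync/sync.h&gt;
#include &lt;unistd.h&gt;

// Sketch: wait on one fence, or on the merge of two, before touching a buffer.
int wait_for_buffer(int fence_fd, int other_fence_fd) {
    if (other_fence_fd >= 0) {
        // Combine both fences into one fd that signals when both have signaled.
        int merged = sync_merge("buffer-ready", fence_fd, other_fence_fd);
        close(fence_fd);
        close(other_fence_fd);
        fence_fd = merged;
    }
    int err = sync_wait(fence_fd, 3000 /* ms timeout */);
    close(fence_fd);
    return err;
}
</pre>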

<p>To implement explicit synchronization, you need to provide the following:</p>

<ul>
  <li>A kernel-space driver that implements a synchronization timeline for a particular piece of
    hardware. Drivers that need to be fence-aware are generally anything that accesses or
    communicates with the hardware composer. See the <code>system/core/include/sync/sync.h</code>
    file for more implementation details. The <code>system/core/libsync</code> directory includes a
    library for communicating with the kernel-space sync driver.</li>
  <li>A hardware composer HAL module (version 1.1 or later) that supports the new synchronization
    functionality. You will need to provide the appropriate synchronization fences as parameters to
    the <code>set()</code> and <code>prepare()</code> functions in the HAL. As a last resort, you
    can pass in -1 for the file descriptor parameters if you cannot support explicit
    synchronization for some reason. This is not recommended, however.</li>
  <li>Two GL-specific extensions related to fences, <code>EGL_ANDROID_native_fence_sync</code> and
    <code>EGL_ANDROID_wait_sync</code>, along with fence support incorporated into your graphics
    drivers (a sketch of exporting a native fence from GL follows this list).</li>
</ul>
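
<p>
  To give a sense of how the GL-side extensions plug into this framework, the following sketch
  creates an Android native fence sync object from the GL command stream and extracts its file
  descriptor so it can be handed to the hardware composer or another process. It assumes the
  extension entry points have been resolved through <code>eglGetProcAddress()</code>:
</p>
<pre>
#include &lt;EGL/egl.h&gt;
#include &lt;EGL/eglext.h&gt;
#include &lt;GLES2/gl2.h&gt;

// Sketch: export a fence fd that signals when all GL commands issued so far
// on the current context have completed.
int export_gl_fence(EGLDisplay dpy,
                    PFNEGLCREATESYNCKHRPROC eglCreateSyncKHR,
                    PFNEGLDUPNATIVEFENCEFDANDROIDPROC eglDupNativeFenceFDANDROID) {
    EGLSyncKHR sync = eglCreateSyncKHR(dpy, EGL_SYNC_NATIVE_FENCE_ANDROID, NULL);
    if (sync == EGL_NO_SYNC_KHR)
        return -1;
    glFlush();  // the native fence fd is only created once the commands are flushed
    // Returns EGL_NO_NATIVE_FENCE_FD_ANDROID (-1) on failure. A real
    // implementation would also eglDestroySyncKHR(dpy, sync) when done.
    return eglDupNativeFenceFDANDROID(dpy, sync);
}
</pre>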