Home | History | Annotate | Download | only in releasetools
      1 #!/usr/bin/env python
      2 #
      3 # Copyright (C) 2008 The Android Open Source Project
      4 #
      5 # Licensed under the Apache License, Version 2.0 (the "License");
      6 # you may not use this file except in compliance with the License.
      7 # You may obtain a copy of the License at
      8 #
      9 #      http://www.apache.org/licenses/LICENSE-2.0
     10 #
     11 # Unless required by applicable law or agreed to in writing, software
     12 # distributed under the License is distributed on an "AS IS" BASIS,
     13 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
     14 # See the License for the specific language governing permissions and
     15 # limitations under the License.
     16 
     17 """
     18 Given a target-files zipfile, produces an OTA package that installs that build.
     19 An incremental OTA is produced if -i is given, otherwise a full OTA is produced.
     20 
     21 Usage:  ota_from_target_files [options] input_target_files output_ota_package
     22 
     23 Common options that apply to both of non-A/B and A/B OTAs
     24 
     25   --downgrade
     26       Intentionally generate an incremental OTA that updates from a newer build
     27       to an older one (e.g. downgrading from P preview back to O MR1).
     28       "ota-downgrade=yes" will be set in the package metadata file. A data wipe
     29       will always be enforced when using this flag, so "ota-wipe=yes" will also
     30       be included in the metadata file. The update-binary in the source build
     31       will be used in the OTA package, unless --binary flag is specified. Please
     32       also check the comment for --override_timestamp below.
     33 
     34   -i  (--incremental_from) <file>
     35       Generate an incremental OTA using the given target-files zip as the
     36       starting build.
     37 
     38   -k  (--package_key) <key>
     39       Key to use to sign the package (default is the value of
     40       default_system_dev_certificate from the input target-files's
     41       META/misc_info.txt, or "build/target/product/security/testkey" if that
     42       value is not specified).
     43 
     44       For incremental OTAs, the default value is based on the source
     45       target-file, not the target build.
     46 
     47   --override_timestamp
     48       Intentionally generate an incremental OTA that updates from a newer build
     49       to an older one (based on timestamp comparison), by setting the downgrade
     50       flag in the package metadata. This differs from --downgrade flag, as we
     51       don't enforce a data wipe with this flag. Because we know for sure this is
     52       NOT an actual downgrade case, but two builds happen to be cut in a reverse
     53       order (e.g. from two branches). A legit use case is that we cut a new
     54       build C (after having A and B), but want to enfore an update path of A ->
     55       C -> B. Specifying --downgrade may not help since that would enforce a
     56       data wipe for C -> B update.
     57 
     58       We used to set a fake timestamp in the package metadata for this flow. But
     59       now we consolidate the two cases (i.e. an actual downgrade, or a downgrade
     60       based on timestamp) with the same "ota-downgrade=yes" flag, with the
     61       difference being whether "ota-wipe=yes" is set.
     62 
     63   --wipe_user_data
     64       Generate an OTA package that will wipe the user data partition when
     65       installed.
     66 
     67   --retrofit_dynamic_partitions
     68       Generates an OTA package that updates a device to support dynamic
     69       partitions (default False). This flag is implied when generating
     70       an incremental OTA where the base build does not support dynamic
     71       partitions but the target build does. For A/B, when this flag is set,
     72       --skip_postinstall is implied.
     73 
     74   --skip_compatibility_check
     75       Skip adding the compatibility package to the generated OTA package.
     76 
     77   --output_metadata_path
     78       Write a copy of the metadata to a separate file. Therefore, users can
     79       read the post build fingerprint without extracting the OTA package.
     80 
     81 Non-A/B OTA specific options
     82 
     83   -b  (--binary) <file>
     84       Use the given binary as the update-binary in the output package, instead
     85       of the binary in the build's target_files. Use for development only.
     86 
     87   --block
     88       Generate a block-based OTA for non-A/B device. We have deprecated the
     89       support for file-based OTA since O. Block-based OTA will be used by
     90       default for all non-A/B devices. Keeping this flag here to not break
     91       existing callers.
     92 
     93   -e  (--extra_script) <file>
     94       Insert the contents of file at the end of the update script.
     95 
     96   --full_bootloader
     97       Similar to --full_radio. When generating an incremental OTA, always
     98       include a full copy of bootloader image.
     99 
    100   --full_radio
    101       When generating an incremental OTA, always include a full copy of radio
    102       image. This option is only meaningful when -i is specified, because a full
    103       radio is always included in a full OTA if applicable.
    104 
    105   --log_diff <file>
    106       Generate a log file that shows the differences in the source and target
    107       builds for an incremental package. This option is only meaningful when -i
    108       is specified.
    109 
    110   -o  (--oem_settings) <main_file[,additional_files...]>
    111       Comma seperated list of files used to specify the expected OEM-specific
    112       properties on the OEM partition of the intended device. Multiple expected
    113       values can be used by providing multiple files. Only the first dict will
    114       be used to compute fingerprint, while the rest will be used to assert
    115       OEM-specific properties.
    116 
    117   --oem_no_mount
    118       For devices with OEM-specific properties but without an OEM partition, do
    119       not mount the OEM partition in the updater-script. This should be very
    120       rarely used, since it's expected to have a dedicated OEM partition for
    121       OEM-specific properties. Only meaningful when -o is specified.
    122 
    123   --stash_threshold <float>
    124       Specify the threshold that will be used to compute the maximum allowed
    125       stash size (defaults to 0.8).
    126 
    127   -t  (--worker_threads) <int>
    128       Specify the number of worker-threads that will be used when generating
    129       patches for incremental updates (defaults to 3).
    130 
    131   --verify
    132       Verify the checksums of the updated system and vendor (if any) partitions.
    133       Non-A/B incremental OTAs only.
    134 
    135   -2  (--two_step)
    136       Generate a 'two-step' OTA package, where recovery is updated first, so
    137       that any changes made to the system partition are done using the new
    138       recovery (new kernel, etc.).
    139 
    140 A/B OTA specific options
    141 
    142   --include_secondary
    143       Additionally include the payload for secondary slot images (default:
    144       False). Only meaningful when generating A/B OTAs.
    145 
    146       By default, an A/B OTA package doesn't contain the images for the
    147       secondary slot (e.g. system_other.img). Specifying this flag allows
    148       generating a separate payload that will install secondary slot images.
    149 
    150       Such a package needs to be applied in a two-stage manner, with a reboot
    151       in-between. During the first stage, the updater applies the primary
    152       payload only. Upon finishing, it reboots the device into the newly updated
    153       slot. It then continues to install the secondary payload to the inactive
    154       slot, but without switching the active slot at the end (needs the matching
    155       support in update_engine, i.e. SWITCH_SLOT_ON_REBOOT flag).
    156 
    157       Due to the special install procedure, the secondary payload will be always
    158       generated as a full payload.
    159 
    160   --payload_signer <signer>
    161       Specify the signer when signing the payload and metadata for A/B OTAs.
    162       By default (i.e. without this flag), it calls 'openssl pkeyutl' to sign
    163       with the package private key. If the private key cannot be accessed
    164       directly, a payload signer that knows how to do that should be specified.
    165       The signer will be supplied with "-inkey <path_to_key>",
    166       "-in <input_file>" and "-out <output_file>" parameters.
    167 
    168   --payload_signer_args <args>
    169       Specify the arguments needed for payload signer.
    170 
    171   --payload_signer_key_size <key_size>
    172       Specify the key size in bytes of the payload signer.
    173 
    174   --skip_postinstall
    175       Skip the postinstall hooks when generating an A/B OTA package (default:
    176       False). Note that this discards ALL the hooks, including non-optional
    177       ones. Should only be used if caller knows it's safe to do so (e.g. all the
    178       postinstall work is to dexopt apps and a data wipe will happen immediately
    179       after). Only meaningful when generating A/B OTAs.
    180 """
    181 
    182 from __future__ import print_function
    183 
    184 import logging
    185 import multiprocessing
    186 import os.path
    187 import shlex
    188 import shutil
    189 import struct
    190 import sys
    191 import tempfile
    192 import zipfile
    193 
    194 import common
    195 import edify_generator
    196 import verity_utils
    197 
    198 if sys.hexversion < 0x02070000:
    199   print("Python 2.7 or newer is required.", file=sys.stderr)
    200   sys.exit(1)
    201 
    202 logger = logging.getLogger(__name__)
    203 
    204 OPTIONS = common.OPTIONS
    205 OPTIONS.package_key = None
    206 OPTIONS.incremental_source = None
    207 OPTIONS.verify = False
    208 OPTIONS.patch_threshold = 0.95
    209 OPTIONS.wipe_user_data = False
    210 OPTIONS.downgrade = False
    211 OPTIONS.extra_script = None
    212 OPTIONS.worker_threads = multiprocessing.cpu_count() // 2
    213 if OPTIONS.worker_threads == 0:
    214   OPTIONS.worker_threads = 1
    215 OPTIONS.two_step = False
    216 OPTIONS.include_secondary = False
    217 OPTIONS.no_signing = False
    218 OPTIONS.block_based = True
    219 OPTIONS.updater_binary = None
    220 OPTIONS.oem_source = None
    221 OPTIONS.oem_no_mount = False
    222 OPTIONS.full_radio = False
    223 OPTIONS.full_bootloader = False
    224 # Stash size cannot exceed cache_size * threshold.
    225 OPTIONS.cache_size = None
    226 OPTIONS.stash_threshold = 0.8
    227 OPTIONS.log_diff = None
    228 OPTIONS.payload_signer = None
    229 OPTIONS.payload_signer_args = []
    230 OPTIONS.payload_signer_key_size = None
    231 OPTIONS.extracted_input = None
    232 OPTIONS.key_passwords = []
    233 OPTIONS.skip_postinstall = False
    234 OPTIONS.retrofit_dynamic_partitions = False
    235 OPTIONS.skip_compatibility_check = False
    236 OPTIONS.output_metadata_path = None
    237 
    238 
    239 METADATA_NAME = 'META-INF/com/android/metadata'
    240 POSTINSTALL_CONFIG = 'META/postinstall_config.txt'
    241 DYNAMIC_PARTITION_INFO = 'META/dynamic_partitions_info.txt'
    242 AB_PARTITIONS = 'META/ab_partitions.txt'
    243 UNZIP_PATTERN = ['IMAGES/*', 'META/*', 'RADIO/*']
    244 RETROFIT_DAP_UNZIP_PATTERN = ['OTA/super_*.img', AB_PARTITIONS]
    245 
    246 
    247 class BuildInfo(object):
    248   """A class that holds the information for a given build.
    249 
    250   This class wraps up the property querying for a given source or target build.
    251   It abstracts away the logic of handling OEM-specific properties, and caches
    252   the commonly used properties such as fingerprint.
    253 
    254   There are two types of info dicts: a) build-time info dict, which is generated
    255   at build time (i.e. included in a target_files zip); b) OEM info dict that is
    256   specified at package generation time (via command line argument
    257   '--oem_settings'). If a build doesn't use OEM-specific properties (i.e. not
    258   having "oem_fingerprint_properties" in build-time info dict), all the queries
    259   would be answered based on build-time info dict only. Otherwise if using
    260   OEM-specific properties, some of them will be calculated from two info dicts.
    261 
    262   Users can query properties similarly as using a dict() (e.g. info['fstab']),
    263   or to query build properties via GetBuildProp() or GetVendorBuildProp().
    264 
    265   Attributes:
    266     info_dict: The build-time info dict.
    267     is_ab: Whether it's a build that uses A/B OTA.
    268     oem_dicts: A list of OEM dicts.
    269     oem_props: A list of OEM properties that should be read from OEM dicts; None
    270         if the build doesn't use any OEM-specific property.
    271     fingerprint: The fingerprint of the build, which would be calculated based
    272         on OEM properties if applicable.
    273     device: The device name, which could come from OEM dicts if applicable.
    274   """
    275 
    276   _RO_PRODUCT_RESOLVE_PROPS = ["ro.product.brand", "ro.product.device",
    277                                "ro.product.manufacturer", "ro.product.model",
    278                                "ro.product.name"]
    279   _RO_PRODUCT_PROPS_DEFAULT_SOURCE_ORDER = ["product", "product_services",
    280                                             "odm", "vendor", "system"]
    281 
    282   def __init__(self, info_dict, oem_dicts):
    283     """Initializes a BuildInfo instance with the given dicts.
    284 
    285     Note that it only wraps up the given dicts, without making copies.
    286 
    287     Arguments:
    288       info_dict: The build-time info dict.
    289       oem_dicts: A list of OEM dicts (which is parsed from --oem_settings). Note
    290           that it always uses the first dict to calculate the fingerprint or the
    291           device name. The rest would be used for asserting OEM properties only
    292           (e.g. one package can be installed on one of these devices).
    293     """
    294     self.info_dict = info_dict
    295     self.oem_dicts = oem_dicts
    296 
    297     self._is_ab = info_dict.get("ab_update") == "true"
    298     self._oem_props = info_dict.get("oem_fingerprint_properties")
    299 
    300     if self._oem_props:
    301       assert oem_dicts, "OEM source required for this build"
    302 
    303     # These two should be computed only after setting self._oem_props.
    304     self._device = self.GetOemProperty("ro.product.device")
    305     self._fingerprint = self.CalculateFingerprint()
    306 
    307   @property
    308   def is_ab(self):
    309     return self._is_ab
    310 
    311   @property
    312   def device(self):
    313     return self._device
    314 
    315   @property
    316   def fingerprint(self):
    317     return self._fingerprint
    318 
    319   @property
    320   def vendor_fingerprint(self):
    321     return self._fingerprint_of("vendor")
    322 
    323   @property
    324   def product_fingerprint(self):
    325     return self._fingerprint_of("product")
    326 
    327   @property
    328   def odm_fingerprint(self):
    329     return self._fingerprint_of("odm")
    330 
    331   def _fingerprint_of(self, partition):
    332     if partition + ".build.prop" not in self.info_dict:
    333       return None
    334     build_prop = self.info_dict[partition + ".build.prop"]
    335     if "ro." + partition + ".build.fingerprint" in build_prop:
    336       return build_prop["ro." + partition + ".build.fingerprint"]
    337     if "ro." + partition + ".build.thumbprint" in build_prop:
    338       return build_prop["ro." + partition + ".build.thumbprint"]
    339     return None
    340 
    341   @property
    342   def oem_props(self):
    343     return self._oem_props
    344 
    345   def __getitem__(self, key):
    346     return self.info_dict[key]
    347 
    348   def __setitem__(self, key, value):
    349     self.info_dict[key] = value
    350 
    351   def get(self, key, default=None):
    352     return self.info_dict.get(key, default)
    353 
    354   def items(self):
    355     return self.info_dict.items()
    356 
    357   def GetBuildProp(self, prop):
    358     """Returns the inquired build property."""
    359     if prop in BuildInfo._RO_PRODUCT_RESOLVE_PROPS:
    360       return self._ResolveRoProductBuildProp(prop)
    361 
    362     try:
    363       return self.info_dict.get("build.prop", {})[prop]
    364     except KeyError:
    365       raise common.ExternalError("couldn't find %s in build.prop" % (prop,))
    366 
    367   def _ResolveRoProductBuildProp(self, prop):
    368     """Resolves the inquired ro.product.* build property"""
    369     prop_val = self.info_dict.get("build.prop", {}).get(prop)
    370     if prop_val:
    371       return prop_val
    372 
    373     source_order_val = self.info_dict.get("build.prop", {}).get(
    374       "ro.product.property_source_order")
    375     if source_order_val:
    376       source_order = source_order_val.split(",")
    377     else:
    378       source_order = BuildInfo._RO_PRODUCT_PROPS_DEFAULT_SOURCE_ORDER
    379 
    380     # Check that all sources in ro.product.property_source_order are valid
    381     if any([x not in BuildInfo._RO_PRODUCT_PROPS_DEFAULT_SOURCE_ORDER
    382             for x in source_order]):
    383       raise common.ExternalError(
    384         "Invalid ro.product.property_source_order '{}'".format(source_order))
    385 
    386     for source in source_order:
    387       source_prop = prop.replace("ro.product", "ro.product.{}".format(source),
    388                                  1)
    389       prop_val = self.info_dict.get("{}.build.prop".format(source), {}).get(
    390         source_prop)
    391       if prop_val:
    392         return prop_val
    393 
    394     raise common.ExternalError("couldn't resolve {}".format(prop))
    395 
    396   def GetVendorBuildProp(self, prop):
    397     """Returns the inquired vendor build property."""
    398     try:
    399       return self.info_dict.get("vendor.build.prop", {})[prop]
    400     except KeyError:
    401       raise common.ExternalError(
    402           "couldn't find %s in vendor.build.prop" % (prop,))
    403 
    404   def GetOemProperty(self, key):
    405     if self.oem_props is not None and key in self.oem_props:
    406       return self.oem_dicts[0][key]
    407     return self.GetBuildProp(key)
    408 
    409   def CalculateFingerprint(self):
    410     if self.oem_props is None:
    411       try:
    412         return self.GetBuildProp("ro.build.fingerprint")
    413       except common.ExternalError:
    414         return "{}/{}/{}:{}/{}/{}:{}/{}".format(
    415           self.GetBuildProp("ro.product.brand"),
    416           self.GetBuildProp("ro.product.name"),
    417           self.GetBuildProp("ro.product.device"),
    418           self.GetBuildProp("ro.build.version.release"),
    419           self.GetBuildProp("ro.build.id"),
    420           self.GetBuildProp("ro.build.version.incremental"),
    421           self.GetBuildProp("ro.build.type"),
    422           self.GetBuildProp("ro.build.tags"))
    423     return "%s/%s/%s:%s" % (
    424         self.GetOemProperty("ro.product.brand"),
    425         self.GetOemProperty("ro.product.name"),
    426         self.GetOemProperty("ro.product.device"),
    427         self.GetBuildProp("ro.build.thumbprint"))
    428 
    429   def WriteMountOemScript(self, script):
    430     assert self.oem_props is not None
    431     recovery_mount_options = self.info_dict.get("recovery_mount_options")
    432     script.Mount("/oem", recovery_mount_options)
    433 
    434   def WriteDeviceAssertions(self, script, oem_no_mount):
    435     # Read the property directly if not using OEM properties.
    436     if not self.oem_props:
    437       script.AssertDevice(self.device)
    438       return
    439 
    440     # Otherwise assert OEM properties.
    441     if not self.oem_dicts:
    442       raise common.ExternalError(
    443           "No OEM file provided to answer expected assertions")
    444 
    445     for prop in self.oem_props.split():
    446       values = []
    447       for oem_dict in self.oem_dicts:
    448         if prop in oem_dict:
    449           values.append(oem_dict[prop])
    450       if not values:
    451         raise common.ExternalError(
    452             "The OEM file is missing the property %s" % (prop,))
    453       script.AssertOemProperty(prop, values, oem_no_mount)
    454 
    455 
    456 class PayloadSigner(object):
    457   """A class that wraps the payload signing works.
    458 
    459   When generating a Payload, hashes of the payload and metadata files will be
    460   signed with the device key, either by calling an external payload signer or
    461   by calling openssl with the package key. This class provides a unified
    462   interface, so that callers can just call PayloadSigner.Sign().
    463 
    464   If an external payload signer has been specified (OPTIONS.payload_signer), it
    465   calls the signer with the provided args (OPTIONS.payload_signer_args). Note
    466   that the signing key should be provided as part of the payload_signer_args.
    467   Otherwise without an external signer, it uses the package key
    468   (OPTIONS.package_key) and calls openssl for the signing works.
    469   """
    470 
    471   def __init__(self):
    472     if OPTIONS.payload_signer is None:
    473       # Prepare the payload signing key.
    474       private_key = OPTIONS.package_key + OPTIONS.private_key_suffix
    475       pw = OPTIONS.key_passwords[OPTIONS.package_key]
    476 
    477       cmd = ["openssl", "pkcs8", "-in", private_key, "-inform", "DER"]
    478       cmd.extend(["-passin", "pass:" + pw] if pw else ["-nocrypt"])
    479       signing_key = common.MakeTempFile(prefix="key-", suffix=".key")
    480       cmd.extend(["-out", signing_key])
    481       common.RunAndCheckOutput(cmd, verbose=False)
    482 
    483       self.signer = "openssl"
    484       self.signer_args = ["pkeyutl", "-sign", "-inkey", signing_key,
    485                           "-pkeyopt", "digest:sha256"]
    486       self.key_size = self._GetKeySizeInBytes(signing_key)
    487     else:
    488       self.signer = OPTIONS.payload_signer
    489       self.signer_args = OPTIONS.payload_signer_args
    490       if OPTIONS.payload_signer_key_size:
    491         self.key_size = int(OPTIONS.payload_signer_key_size)
    492         assert self.key_size == 256 or self.key_size == 512, \
    493             "Unsupported key size {}".format(OPTIONS.payload_signer_key_size)
    494       else:
    495         self.key_size = 256
    496 
    497   @staticmethod
    498   def _GetKeySizeInBytes(signing_key):
    499     modulus_file = common.MakeTempFile(prefix="modulus-")
    500     cmd = ["openssl", "rsa", "-inform", "PEM", "-in", signing_key, "-modulus",
    501            "-noout", "-out", modulus_file]
    502     common.RunAndCheckOutput(cmd, verbose=False)
    503 
    504     with open(modulus_file) as f:
    505       modulus_string = f.read()
    506     # The modulus string has the format "Modulus=$data", where $data is the
    507     # concatenation of hex dump of the modulus.
    508     MODULUS_PREFIX = "Modulus="
    509     assert modulus_string.startswith(MODULUS_PREFIX)
    510     modulus_string = modulus_string[len(MODULUS_PREFIX):]
    511     key_size = len(modulus_string) / 2
    512     assert key_size == 256 or key_size == 512, \
    513         "Unsupported key size {}".format(key_size)
    514     return key_size
    515 
    516   def Sign(self, in_file):
    517     """Signs the given input file. Returns the output filename."""
    518     out_file = common.MakeTempFile(prefix="signed-", suffix=".bin")
    519     cmd = [self.signer] + self.signer_args + ['-in', in_file, '-out', out_file]
    520     common.RunAndCheckOutput(cmd)
    521     return out_file
    522 
    523 
    524 class Payload(object):
    525   """Manages the creation and the signing of an A/B OTA Payload."""
    526 
    527   PAYLOAD_BIN = 'payload.bin'
    528   PAYLOAD_PROPERTIES_TXT = 'payload_properties.txt'
    529   SECONDARY_PAYLOAD_BIN = 'secondary/payload.bin'
    530   SECONDARY_PAYLOAD_PROPERTIES_TXT = 'secondary/payload_properties.txt'
    531 
    532   def __init__(self, secondary=False):
    533     """Initializes a Payload instance.
    534 
    535     Args:
    536       secondary: Whether it's generating a secondary payload (default: False).
    537     """
    538     self.payload_file = None
    539     self.payload_properties = None
    540     self.secondary = secondary
    541 
    542   def Generate(self, target_file, source_file=None, additional_args=None):
    543     """Generates a payload from the given target-files zip(s).
    544 
    545     Args:
    546       target_file: The filename of the target build target-files zip.
    547       source_file: The filename of the source build target-files zip; or None if
    548           generating a full OTA.
    549       additional_args: A list of additional args that should be passed to
    550           brillo_update_payload script; or None.
    551     """
    552     if additional_args is None:
    553       additional_args = []
    554 
    555     payload_file = common.MakeTempFile(prefix="payload-", suffix=".bin")
    556     cmd = ["brillo_update_payload", "generate",
    557            "--payload", payload_file,
    558            "--target_image", target_file]
    559     if source_file is not None:
    560       cmd.extend(["--source_image", source_file])
    561     cmd.extend(additional_args)
    562     common.RunAndCheckOutput(cmd)
    563 
    564     self.payload_file = payload_file
    565     self.payload_properties = None
    566 
    567   def Sign(self, payload_signer):
    568     """Generates and signs the hashes of the payload and metadata.
    569 
    570     Args:
    571       payload_signer: A PayloadSigner() instance that serves the signing work.
    572 
    573     Raises:
    574       AssertionError: On any failure when calling brillo_update_payload script.
    575     """
    576     assert isinstance(payload_signer, PayloadSigner)
    577 
    578     # 1. Generate hashes of the payload and metadata files.
    579     payload_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
    580     metadata_sig_file = common.MakeTempFile(prefix="sig-", suffix=".bin")
    581     cmd = ["brillo_update_payload", "hash",
    582            "--unsigned_payload", self.payload_file,
    583            "--signature_size", str(payload_signer.key_size),
    584            "--metadata_hash_file", metadata_sig_file,
    585            "--payload_hash_file", payload_sig_file]
    586     common.RunAndCheckOutput(cmd)
    587 
    588     # 2. Sign the hashes.
    589     signed_payload_sig_file = payload_signer.Sign(payload_sig_file)
    590     signed_metadata_sig_file = payload_signer.Sign(metadata_sig_file)
    591 
    592     # 3. Insert the signatures back into the payload file.
    593     signed_payload_file = common.MakeTempFile(prefix="signed-payload-",
    594                                               suffix=".bin")
    595     cmd = ["brillo_update_payload", "sign",
    596            "--unsigned_payload", self.payload_file,
    597            "--payload", signed_payload_file,
    598            "--signature_size", str(payload_signer.key_size),
    599            "--metadata_signature_file", signed_metadata_sig_file,
    600            "--payload_signature_file", signed_payload_sig_file]
    601     common.RunAndCheckOutput(cmd)
    602 
    603     # 4. Dump the signed payload properties.
    604     properties_file = common.MakeTempFile(prefix="payload-properties-",
    605                                           suffix=".txt")
    606     cmd = ["brillo_update_payload", "properties",
    607            "--payload", signed_payload_file,
    608            "--properties_file", properties_file]
    609     common.RunAndCheckOutput(cmd)
    610 
    611     if self.secondary:
    612       with open(properties_file, "a") as f:
    613         f.write("SWITCH_SLOT_ON_REBOOT=0\n")
    614 
    615     if OPTIONS.wipe_user_data:
    616       with open(properties_file, "a") as f:
    617         f.write("POWERWASH=1\n")
    618 
    619     self.payload_file = signed_payload_file
    620     self.payload_properties = properties_file
    621 
    622   def WriteToZip(self, output_zip):
    623     """Writes the payload to the given zip.
    624 
    625     Args:
    626       output_zip: The output ZipFile instance.
    627     """
    628     assert self.payload_file is not None
    629     assert self.payload_properties is not None
    630 
    631     if self.secondary:
    632       payload_arcname = Payload.SECONDARY_PAYLOAD_BIN
    633       payload_properties_arcname = Payload.SECONDARY_PAYLOAD_PROPERTIES_TXT
    634     else:
    635       payload_arcname = Payload.PAYLOAD_BIN
    636       payload_properties_arcname = Payload.PAYLOAD_PROPERTIES_TXT
    637 
    638     # Add the signed payload file and properties into the zip. In order to
    639     # support streaming, we pack them as ZIP_STORED. So these entries can be
    640     # read directly with the offset and length pairs.
    641     common.ZipWrite(output_zip, self.payload_file, arcname=payload_arcname,
    642                     compress_type=zipfile.ZIP_STORED)
    643     common.ZipWrite(output_zip, self.payload_properties,
    644                     arcname=payload_properties_arcname,
    645                     compress_type=zipfile.ZIP_STORED)
    646 
    647 
    648 def SignOutput(temp_zip_name, output_zip_name):
    649   pw = OPTIONS.key_passwords[OPTIONS.package_key]
    650 
    651   common.SignFile(temp_zip_name, output_zip_name, OPTIONS.package_key, pw,
    652                   whole_file=True)
    653 
    654 
    655 def _LoadOemDicts(oem_source):
    656   """Returns the list of loaded OEM properties dict."""
    657   if not oem_source:
    658     return None
    659 
    660   oem_dicts = []
    661   for oem_file in oem_source:
    662     with open(oem_file) as fp:
    663       oem_dicts.append(common.LoadDictionaryFromLines(fp.readlines()))
    664   return oem_dicts
    665 
    666 
    667 def _WriteRecoveryImageToBoot(script, output_zip):
    668   """Find and write recovery image to /boot in two-step OTA.
    669 
    670   In two-step OTAs, we write recovery image to /boot as the first step so that
    671   we can reboot to there and install a new recovery image to /recovery.
    672   A special "recovery-two-step.img" will be preferred, which encodes the correct
    673   path of "/boot". Otherwise the device may show "device is corrupt" message
    674   when booting into /boot.
    675 
    676   Fall back to using the regular recovery.img if the two-step recovery image
    677   doesn't exist. Note that rebuilding the special image at this point may be
    678   infeasible, because we don't have the desired boot signer and keys when
    679   calling ota_from_target_files.py.
    680   """
    681 
    682   recovery_two_step_img_name = "recovery-two-step.img"
    683   recovery_two_step_img_path = os.path.join(
    684       OPTIONS.input_tmp, "IMAGES", recovery_two_step_img_name)
    685   if os.path.exists(recovery_two_step_img_path):
    686     recovery_two_step_img = common.GetBootableImage(
    687         recovery_two_step_img_name, recovery_two_step_img_name,
    688         OPTIONS.input_tmp, "RECOVERY")
    689     common.ZipWriteStr(
    690         output_zip, recovery_two_step_img_name, recovery_two_step_img.data)
    691     logger.info(
    692         "two-step package: using %s in stage 1/3", recovery_two_step_img_name)
    693     script.WriteRawImage("/boot", recovery_two_step_img_name)
    694   else:
    695     logger.info("two-step package: using recovery.img in stage 1/3")
    696     # The "recovery.img" entry has been written into package earlier.
    697     script.WriteRawImage("/boot", "recovery.img")
    698 
    699 
    700 def HasRecoveryPatch(target_files_zip):
    701   namelist = [name for name in target_files_zip.namelist()]
    702   return ("SYSTEM/recovery-from-boot.p" in namelist or
    703           "SYSTEM/etc/recovery.img" in namelist)
    704 
    705 
    706 def HasPartition(target_files_zip, partition):
    707   try:
    708     target_files_zip.getinfo(partition.upper() + "/")
    709     return True
    710   except KeyError:
    711     return False
    712 
    713 
    714 def HasVendorPartition(target_files_zip):
    715   return HasPartition(target_files_zip, "vendor")
    716 
    717 
    718 def HasProductPartition(target_files_zip):
    719   return HasPartition(target_files_zip, "product")
    720 
    721 
    722 def HasOdmPartition(target_files_zip):
    723   return HasPartition(target_files_zip, "odm")
    724 
    725 
    726 def HasTrebleEnabled(target_files_zip, target_info):
    727   return (HasVendorPartition(target_files_zip) and
    728           target_info.GetBuildProp("ro.treble.enabled") == "true")
    729 
    730 
    731 def WriteFingerprintAssertion(script, target_info, source_info):
    732   source_oem_props = source_info.oem_props
    733   target_oem_props = target_info.oem_props
    734 
    735   if source_oem_props is None and target_oem_props is None:
    736     script.AssertSomeFingerprint(
    737         source_info.fingerprint, target_info.fingerprint)
    738   elif source_oem_props is not None and target_oem_props is not None:
    739     script.AssertSomeThumbprint(
    740         target_info.GetBuildProp("ro.build.thumbprint"),
    741         source_info.GetBuildProp("ro.build.thumbprint"))
    742   elif source_oem_props is None and target_oem_props is not None:
    743     script.AssertFingerprintOrThumbprint(
    744         source_info.fingerprint,
    745         target_info.GetBuildProp("ro.build.thumbprint"))
    746   else:
    747     script.AssertFingerprintOrThumbprint(
    748         target_info.fingerprint,
    749         source_info.GetBuildProp("ro.build.thumbprint"))
    750 
    751 
    752 def AddCompatibilityArchiveIfTrebleEnabled(target_zip, output_zip, target_info,
    753                                            source_info=None):
    754   """Adds compatibility info into the output zip if it's Treble-enabled target.
    755 
    756   Metadata used for on-device compatibility verification is retrieved from
    757   target_zip then added to compatibility.zip which is added to the output_zip
    758   archive.
    759 
    760   Compatibility archive should only be included for devices that have enabled
    761   Treble support.
    762 
    763   Args:
    764     target_zip: Zip file containing the source files to be included for OTA.
    765     output_zip: Zip file that will be sent for OTA.
    766     target_info: The BuildInfo instance that holds the target build info.
    767     source_info: The BuildInfo instance that holds the source build info, if
    768         generating an incremental OTA; None otherwise.
    769   """
    770 
    771   def AddCompatibilityArchive(framework_updated, device_updated):
    772     """Adds compatibility info based on update status of both sides of Treble
    773     boundary.
    774 
    775     Args:
    776       framework_updated: If True, the system / product image will be updated
    777           and therefore their metadata should be included.
    778       device_updated: If True, the vendor / odm image will be updated and
    779           therefore their metadata should be included.
    780     """
    781     # Determine what metadata we need. Files are names relative to META/.
    782     compatibility_files = []
    783     device_metadata = ("vendor_manifest.xml", "vendor_matrix.xml")
    784     framework_metadata = ("system_manifest.xml", "system_matrix.xml")
    785     if device_updated:
    786       compatibility_files += device_metadata
    787     if framework_updated:
    788       compatibility_files += framework_metadata
    789 
    790     # Create new archive.
    791     compatibility_archive = tempfile.NamedTemporaryFile()
    792     compatibility_archive_zip = zipfile.ZipFile(
    793         compatibility_archive, "w", compression=zipfile.ZIP_DEFLATED)
    794 
    795     # Add metadata.
    796     for file_name in compatibility_files:
    797       target_file_name = "META/" + file_name
    798 
    799       if target_file_name in target_zip.namelist():
    800         data = target_zip.read(target_file_name)
    801         common.ZipWriteStr(compatibility_archive_zip, file_name, data)
    802 
    803     # Ensure files are written before we copy into output_zip.
    804     compatibility_archive_zip.close()
    805 
    806     # Only add the archive if we have any compatibility info.
    807     if compatibility_archive_zip.namelist():
    808       common.ZipWrite(output_zip, compatibility_archive.name,
    809                       arcname="compatibility.zip",
    810                       compress_type=zipfile.ZIP_STORED)
    811 
    812   def FingerprintChanged(source_fp, target_fp):
    813     if source_fp is None or target_fp is None:
    814       return True
    815     return source_fp != target_fp
    816 
    817   # Will only proceed if the target has enabled the Treble support (as well as
    818   # having a /vendor partition).
    819   if not HasTrebleEnabled(target_zip, target_info):
    820     return
    821 
    822   # Skip adding the compatibility package as a workaround for b/114240221. The
    823   # compatibility will always fail on devices without qualified kernels.
    824   if OPTIONS.skip_compatibility_check:
    825     return
    826 
    827   # Full OTA carries the info for system/vendor/product/odm
    828   if source_info is None:
    829     AddCompatibilityArchive(True, True)
    830     return
    831 
    832   source_fp = source_info.fingerprint
    833   target_fp = target_info.fingerprint
    834   system_updated = source_fp != target_fp
    835 
    836   # other build fingerprints could be possibly blacklisted at build time. For
    837   # such a case, we consider those images being changed.
    838   vendor_updated = FingerprintChanged(source_info.vendor_fingerprint,
    839                                       target_info.vendor_fingerprint)
    840   product_updated = HasProductPartition(target_zip) and \
    841                     FingerprintChanged(source_info.product_fingerprint,
    842                                        target_info.product_fingerprint)
    843   odm_updated = HasOdmPartition(target_zip) and \
    844                 FingerprintChanged(source_info.odm_fingerprint,
    845                                    target_info.odm_fingerprint)
    846 
    847   AddCompatibilityArchive(system_updated or product_updated,
    848                           vendor_updated or odm_updated)
    849 
    850 
    851 def WriteFullOTAPackage(input_zip, output_file):
    852   target_info = BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
    853 
    854   # We don't know what version it will be installed on top of. We expect the API
    855   # just won't change very often. Similarly for fstab, it might have changed in
    856   # the target build.
    857   target_api_version = target_info["recovery_api_version"]
    858   script = edify_generator.EdifyGenerator(target_api_version, target_info)
    859 
    860   if target_info.oem_props and not OPTIONS.oem_no_mount:
    861     target_info.WriteMountOemScript(script)
    862 
    863   metadata = GetPackageMetadata(target_info)
    864 
    865   if not OPTIONS.no_signing:
    866     staging_file = common.MakeTempFile(suffix='.zip')
    867   else:
    868     staging_file = output_file
    869 
    870   output_zip = zipfile.ZipFile(
    871       staging_file, "w", compression=zipfile.ZIP_DEFLATED)
    872 
    873   device_specific = common.DeviceSpecificParams(
    874       input_zip=input_zip,
    875       input_version=target_api_version,
    876       output_zip=output_zip,
    877       script=script,
    878       input_tmp=OPTIONS.input_tmp,
    879       metadata=metadata,
    880       info_dict=OPTIONS.info_dict)
    881 
    882   assert HasRecoveryPatch(input_zip)
    883 
    884   # Assertions (e.g. downgrade check, device properties check).
    885   ts = target_info.GetBuildProp("ro.build.date.utc")
    886   ts_text = target_info.GetBuildProp("ro.build.date")
    887   script.AssertOlderBuild(ts, ts_text)
    888 
    889   target_info.WriteDeviceAssertions(script, OPTIONS.oem_no_mount)
    890   device_specific.FullOTA_Assertions()
    891 
    892   # Two-step package strategy (in chronological order, which is *not*
    893   # the order in which the generated script has things):
    894   #
    895   # if stage is not "2/3" or "3/3":
    896   #    write recovery image to boot partition
    897   #    set stage to "2/3"
    898   #    reboot to boot partition and restart recovery
    899   # else if stage is "2/3":
    900   #    write recovery image to recovery partition
    901   #    set stage to "3/3"
    902   #    reboot to recovery partition and restart recovery
    903   # else:
    904   #    (stage must be "3/3")
    905   #    set stage to ""
    906   #    do normal full package installation:
    907   #       wipe and install system, boot image, etc.
    908   #       set up system to update recovery partition on first boot
    909   #    complete script normally
    910   #    (allow recovery to mark itself finished and reboot)
    911 
    912   recovery_img = common.GetBootableImage("recovery.img", "recovery.img",
    913                                          OPTIONS.input_tmp, "RECOVERY")
    914   if OPTIONS.two_step:
    915     if not target_info.get("multistage_support"):
    916       assert False, "two-step packages not supported by this build"
    917     fs = target_info["fstab"]["/misc"]
    918     assert fs.fs_type.upper() == "EMMC", \
    919         "two-step packages only supported on devices with EMMC /misc partitions"
    920     bcb_dev = {"bcb_dev": fs.device}
    921     common.ZipWriteStr(output_zip, "recovery.img", recovery_img.data)
    922     script.AppendExtra("""
    923 if get_stage("%(bcb_dev)s") == "2/3" then
    924 """ % bcb_dev)
    925 
    926     # Stage 2/3: Write recovery image to /recovery (currently running /boot).
    927     script.Comment("Stage 2/3")
    928     script.WriteRawImage("/recovery", "recovery.img")
    929     script.AppendExtra("""
    930 set_stage("%(bcb_dev)s", "3/3");
    931 reboot_now("%(bcb_dev)s", "recovery");
    932 else if get_stage("%(bcb_dev)s") == "3/3" then
    933 """ % bcb_dev)
    934 
    935     # Stage 3/3: Make changes.
    936     script.Comment("Stage 3/3")
    937 
    938   # Dump fingerprints
    939   script.Print("Target: {}".format(target_info.fingerprint))
    940 
    941   device_specific.FullOTA_InstallBegin()
    942 
    943   system_progress = 0.75
    944 
    945   if OPTIONS.wipe_user_data:
    946     system_progress -= 0.1
    947   if HasVendorPartition(input_zip):
    948     system_progress -= 0.1
    949 
    950   script.ShowProgress(system_progress, 0)
    951 
    952   def GetBlockDifference(partition):
    953     # Full OTA is done as an "incremental" against an empty source image. This
    954     # has the effect of writing new data from the package to the entire
    955     # partition, but lets us reuse the updater code that writes incrementals to
    956     # do it.
    957     tgt = common.GetUserImage(partition, OPTIONS.input_tmp, input_zip,
    958                               info_dict=target_info,
    959                               reset_file_map=True)
    960     diff = common.BlockDifference(partition, tgt, src=None)
    961     return diff
    962 
    963   device_specific_diffs = device_specific.FullOTA_GetBlockDifferences()
    964   if device_specific_diffs:
    965     assert all(isinstance(diff, common.BlockDifference)
    966                for diff in device_specific_diffs), \
    967         "FullOTA_GetBlockDifferences is not returning a list of " \
    968         "BlockDifference objects"
    969 
    970   progress_dict = dict()
    971   block_diffs = [GetBlockDifference("system")]
    972   if HasVendorPartition(input_zip):
    973     block_diffs.append(GetBlockDifference("vendor"))
    974     progress_dict["vendor"] = 0.1
    975   if device_specific_diffs:
    976     block_diffs += device_specific_diffs
    977 
    978   if target_info.get('use_dynamic_partitions') == "true":
    979     # Use empty source_info_dict to indicate that all partitions / groups must
    980     # be re-added.
    981     dynamic_partitions_diff = common.DynamicPartitionsDifference(
    982         info_dict=OPTIONS.info_dict,
    983         block_diffs=block_diffs,
    984         progress_dict=progress_dict)
    985     dynamic_partitions_diff.WriteScript(script, output_zip,
    986                                         write_verify_script=OPTIONS.verify)
    987   else:
    988     for block_diff in block_diffs:
    989       block_diff.WriteScript(script, output_zip,
    990                              progress=progress_dict.get(block_diff.partition),
    991                              write_verify_script=OPTIONS.verify)
    992 
    993   AddCompatibilityArchiveIfTrebleEnabled(input_zip, output_zip, target_info)
    994 
    995   boot_img = common.GetBootableImage(
    996       "boot.img", "boot.img", OPTIONS.input_tmp, "BOOT")
    997   common.CheckSize(boot_img.data, "boot.img", target_info)
    998   common.ZipWriteStr(output_zip, "boot.img", boot_img.data)
    999 
   1000   script.ShowProgress(0.05, 5)
   1001   script.WriteRawImage("/boot", "boot.img")
   1002 
   1003   script.ShowProgress(0.2, 10)
   1004   device_specific.FullOTA_InstallEnd()
   1005 
   1006   if OPTIONS.extra_script is not None:
   1007     script.AppendExtra(OPTIONS.extra_script)
   1008 
   1009   script.UnmountAll()
   1010 
   1011   if OPTIONS.wipe_user_data:
   1012     script.ShowProgress(0.1, 10)
   1013     script.FormatPartition("/data")
   1014 
   1015   if OPTIONS.two_step:
   1016     script.AppendExtra("""
   1017 set_stage("%(bcb_dev)s", "");
   1018 """ % bcb_dev)
   1019     script.AppendExtra("else\n")
   1020 
   1021     # Stage 1/3: Nothing to verify for full OTA. Write recovery image to /boot.
   1022     script.Comment("Stage 1/3")
   1023     _WriteRecoveryImageToBoot(script, output_zip)
   1024 
   1025     script.AppendExtra("""
   1026 set_stage("%(bcb_dev)s", "2/3");
   1027 reboot_now("%(bcb_dev)s", "");
   1028 endif;
   1029 endif;
   1030 """ % bcb_dev)
   1031 
   1032   script.SetProgress(1)
   1033   script.AddToZip(input_zip, output_zip, input_path=OPTIONS.updater_binary)
   1034   metadata["ota-required-cache"] = str(script.required_cache)
   1035 
   1036   # We haven't written the metadata entry, which will be done in
   1037   # FinalizeMetadata.
   1038   common.ZipClose(output_zip)
   1039 
   1040   needed_property_files = (
   1041       NonAbOtaPropertyFiles(),
   1042   )
   1043   FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
   1044 
   1045 
   1046 def WriteMetadata(metadata, output):
   1047   """Writes the metadata to the zip archive or a file.
   1048 
   1049   Args:
   1050     metadata: The metadata dict for the package.
   1051     output: A ZipFile object or a string of the output file path.
   1052   """
   1053 
   1054   value = "".join(["%s=%s\n" % kv for kv in sorted(metadata.iteritems())])
   1055   if isinstance(output, zipfile.ZipFile):
   1056     common.ZipWriteStr(output, METADATA_NAME, value,
   1057                        compress_type=zipfile.ZIP_STORED)
   1058     return
   1059 
   1060   with open(output, 'w') as f:
   1061     f.write(value)
   1062 
   1063 
   1064 def HandleDowngradeMetadata(metadata, target_info, source_info):
   1065   # Only incremental OTAs are allowed to reach here.
   1066   assert OPTIONS.incremental_source is not None
   1067 
   1068   post_timestamp = target_info.GetBuildProp("ro.build.date.utc")
   1069   pre_timestamp = source_info.GetBuildProp("ro.build.date.utc")
   1070   is_downgrade = long(post_timestamp) < long(pre_timestamp)
   1071 
   1072   if OPTIONS.downgrade:
   1073     if not is_downgrade:
   1074       raise RuntimeError(
   1075           "--downgrade or --override_timestamp specified but no downgrade "
   1076           "detected: pre: %s, post: %s" % (pre_timestamp, post_timestamp))
   1077     metadata["ota-downgrade"] = "yes"
   1078   else:
   1079     if is_downgrade:
   1080       raise RuntimeError(
   1081           "Downgrade detected based on timestamp check: pre: %s, post: %s. "
   1082           "Need to specify --override_timestamp OR --downgrade to allow "
   1083           "building the incremental." % (pre_timestamp, post_timestamp))
   1084 
   1085 
   1086 def GetPackageMetadata(target_info, source_info=None):
   1087   """Generates and returns the metadata dict.
   1088 
   1089   It generates a dict() that contains the info to be written into an OTA
   1090   package (META-INF/com/android/metadata). It also handles the detection of
   1091   downgrade / data wipe based on the global options.
   1092 
   1093   Args:
   1094     target_info: The BuildInfo instance that holds the target build info.
   1095     source_info: The BuildInfo instance that holds the source build info, or
   1096         None if generating full OTA.
   1097 
   1098   Returns:
   1099     A dict to be written into package metadata entry.
   1100   """
   1101   assert isinstance(target_info, BuildInfo)
   1102   assert source_info is None or isinstance(source_info, BuildInfo)
   1103 
   1104   metadata = {
   1105       'post-build' : target_info.fingerprint,
   1106       'post-build-incremental' : target_info.GetBuildProp(
   1107           'ro.build.version.incremental'),
   1108       'post-sdk-level' : target_info.GetBuildProp(
   1109           'ro.build.version.sdk'),
   1110       'post-security-patch-level' : target_info.GetBuildProp(
   1111           'ro.build.version.security_patch'),
   1112   }
   1113 
   1114   if target_info.is_ab:
   1115     metadata['ota-type'] = 'AB'
   1116     metadata['ota-required-cache'] = '0'
   1117   else:
   1118     metadata['ota-type'] = 'BLOCK'
   1119 
   1120   if OPTIONS.wipe_user_data:
   1121     metadata['ota-wipe'] = 'yes'
   1122 
   1123   if OPTIONS.retrofit_dynamic_partitions:
   1124     metadata['ota-retrofit-dynamic-partitions'] = 'yes'
   1125 
   1126   is_incremental = source_info is not None
   1127   if is_incremental:
   1128     metadata['pre-build'] = source_info.fingerprint
   1129     metadata['pre-build-incremental'] = source_info.GetBuildProp(
   1130         'ro.build.version.incremental')
   1131     metadata['pre-device'] = source_info.device
   1132   else:
   1133     metadata['pre-device'] = target_info.device
   1134 
   1135   # Use the actual post-timestamp, even for a downgrade case.
   1136   metadata['post-timestamp'] = target_info.GetBuildProp('ro.build.date.utc')
   1137 
   1138   # Detect downgrades and set up downgrade flags accordingly.
   1139   if is_incremental:
   1140     HandleDowngradeMetadata(metadata, target_info, source_info)
   1141 
   1142   return metadata
   1143 
   1144 
   1145 class PropertyFiles(object):
   1146   """A class that computes the property-files string for an OTA package.
   1147 
   1148   A property-files string is a comma-separated string that contains the
   1149   offset/size info for an OTA package. The entries, which must be ZIP_STORED,
   1150   can be fetched directly with the package URL along with the offset/size info.
   1151   These strings can be used for streaming A/B OTAs, or allowing an updater to
   1152   download package metadata entry directly, without paying the cost of
   1153   downloading entire package.
   1154 
   1155   Computing the final property-files string requires two passes. Because doing
   1156   the whole package signing (with signapk.jar) will possibly reorder the ZIP
   1157   entries, which may in turn invalidate earlier computed ZIP entry offset/size
   1158   values.
   1159 
   1160   This class provides functions to be called for each pass. The general flow is
   1161   as follows.
   1162 
   1163     property_files = PropertyFiles()
   1164     # The first pass, which writes placeholders before doing initial signing.
   1165     property_files.Compute()
   1166     SignOutput()
   1167 
   1168     # The second pass, by replacing the placeholders with actual data.
   1169     property_files.Finalize()
   1170     SignOutput()
   1171 
   1172   And the caller can additionally verify the final result.
   1173 
   1174     property_files.Verify()
   1175   """
   1176 
   1177   def __init__(self):
   1178     self.name = None
   1179     self.required = ()
   1180     self.optional = ()
   1181 
   1182   def Compute(self, input_zip):
   1183     """Computes and returns a property-files string with placeholders.
   1184 
   1185     We reserve extra space for the offset and size of the metadata entry itself,
   1186     although we don't know the final values until the package gets signed.
   1187 
   1188     Args:
   1189       input_zip: The input ZIP file.
   1190 
   1191     Returns:
   1192       A string with placeholders for the metadata offset/size info, e.g.
   1193       "payload.bin:679:343,payload_properties.txt:378:45,metadata:        ".
   1194     """
   1195     return self.GetPropertyFilesString(input_zip, reserve_space=True)
   1196 
   1197   class InsufficientSpaceException(Exception):
   1198     pass
   1199 
   1200   def Finalize(self, input_zip, reserved_length):
   1201     """Finalizes a property-files string with actual METADATA offset/size info.
   1202 
   1203     The input ZIP file has been signed, with the ZIP entries in the desired
   1204     place (signapk.jar will possibly reorder the ZIP entries). Now we compute
   1205     the ZIP entry offsets and construct the property-files string with actual
   1206     data. Note that during this process, we must pad the property-files string
   1207     to the reserved length, so that the METADATA entry size remains the same.
   1208     Otherwise the entries' offsets and sizes may change again.
   1209 
   1210     Args:
   1211       input_zip: The input ZIP file.
   1212       reserved_length: The reserved length of the property-files string during
   1213           the call to Compute(). The final string must be no more than this
   1214           size.
   1215 
   1216     Returns:
   1217       A property-files string including the metadata offset/size info, e.g.
   1218       "payload.bin:679:343,payload_properties.txt:378:45,metadata:69:379  ".
   1219 
   1220     Raises:
   1221       InsufficientSpaceException: If the reserved length is insufficient to hold
   1222           the final string.
   1223     """
   1224     result = self.GetPropertyFilesString(input_zip, reserve_space=False)
   1225     if len(result) > reserved_length:
   1226       raise self.InsufficientSpaceException(
   1227           'Insufficient reserved space: reserved={}, actual={}'.format(
   1228               reserved_length, len(result)))
   1229 
   1230     result += ' ' * (reserved_length - len(result))
   1231     return result
   1232 
   1233   def Verify(self, input_zip, expected):
   1234     """Verifies the input ZIP file contains the expected property-files string.
   1235 
   1236     Args:
   1237       input_zip: The input ZIP file.
   1238       expected: The property-files string that's computed from Finalize().
   1239 
   1240     Raises:
   1241       AssertionError: On finding a mismatch.
   1242     """
   1243     actual = self.GetPropertyFilesString(input_zip)
   1244     assert actual == expected, \
   1245         "Mismatching streaming metadata: {} vs {}.".format(actual, expected)
   1246 
   1247   def GetPropertyFilesString(self, zip_file, reserve_space=False):
   1248     """
   1249     Constructs the property-files string per request.
   1250 
   1251     Args:
   1252       zip_file: The input ZIP file.
   1253       reserved_length: The reserved length of the property-files string.
   1254 
   1255     Returns:
   1256       A property-files string including the metadata offset/size info, e.g.
   1257       "payload.bin:679:343,payload_properties.txt:378:45,metadata:     ".
   1258     """
   1259 
   1260     def ComputeEntryOffsetSize(name):
   1261       """Computes the zip entry offset and size."""
   1262       info = zip_file.getinfo(name)
   1263       offset = info.header_offset
   1264       offset += zipfile.sizeFileHeader
   1265       offset += len(info.extra) + len(info.filename)
   1266       size = info.file_size
   1267       return '%s:%d:%d' % (os.path.basename(name), offset, size)
   1268 
   1269     tokens = []
   1270     tokens.extend(self._GetPrecomputed(zip_file))
   1271     for entry in self.required:
   1272       tokens.append(ComputeEntryOffsetSize(entry))
   1273     for entry in self.optional:
   1274       if entry in zip_file.namelist():
   1275         tokens.append(ComputeEntryOffsetSize(entry))
   1276 
   1277     # 'META-INF/com/android/metadata' is required. We don't know its actual
   1278     # offset and length (as well as the values for other entries). So we reserve
   1279     # 15-byte as a placeholder ('offset:length'), which is sufficient to cover
   1280     # the space for metadata entry. Because 'offset' allows a max of 10-digit
   1281     # (i.e. ~9 GiB), with a max of 4-digit for the length. Note that all the
   1282     # reserved space serves the metadata entry only.
   1283     if reserve_space:
   1284       tokens.append('metadata:' + ' ' * 15)
   1285     else:
   1286       tokens.append(ComputeEntryOffsetSize(METADATA_NAME))
   1287 
   1288     return ','.join(tokens)
   1289 
   1290   def _GetPrecomputed(self, input_zip):
   1291     """Computes the additional tokens to be included into the property-files.
   1292 
   1293     This applies to tokens without actual ZIP entries, such as
   1294     payload_metadadata.bin. We want to expose the offset/size to updaters, so
   1295     that they can download the payload metadata directly with the info.
   1296 
   1297     Args:
   1298       input_zip: The input zip file.
   1299 
   1300     Returns:
   1301       A list of strings (tokens) to be added to the property-files string.
   1302     """
   1303     # pylint: disable=no-self-use
   1304     # pylint: disable=unused-argument
   1305     return []
   1306 
   1307 
   1308 class StreamingPropertyFiles(PropertyFiles):
   1309   """A subclass for computing the property-files for streaming A/B OTAs."""
   1310 
   1311   def __init__(self):
   1312     super(StreamingPropertyFiles, self).__init__()
   1313     self.name = 'ota-streaming-property-files'
   1314     self.required = (
   1315         # payload.bin and payload_properties.txt must exist.
   1316         'payload.bin',
   1317         'payload_properties.txt',
   1318     )
   1319     self.optional = (
   1320         # care_map is available only if dm-verity is enabled.
   1321         'care_map.pb',
   1322         'care_map.txt',
   1323         # compatibility.zip is available only if target supports Treble.
   1324         'compatibility.zip',
   1325     )
   1326 
   1327 
   1328 class AbOtaPropertyFiles(StreamingPropertyFiles):
   1329   """The property-files for A/B OTA that includes payload_metadata.bin info.
   1330 
   1331   Since P, we expose one more token (aka property-file), in addition to the ones
   1332   for streaming A/B OTA, for a virtual entry of 'payload_metadata.bin'.
   1333   'payload_metadata.bin' is the header part of a payload ('payload.bin'), which
   1334   doesn't exist as a separate ZIP entry, but can be used to verify if the
   1335   payload can be applied on the given device.
   1336 
   1337   For backward compatibility, we keep both of the 'ota-streaming-property-files'
   1338   and the newly added 'ota-property-files' in P. The new token will only be
   1339   available in 'ota-property-files'.
   1340   """
   1341 
   1342   def __init__(self):
   1343     super(AbOtaPropertyFiles, self).__init__()
   1344     self.name = 'ota-property-files'
   1345 
   1346   def _GetPrecomputed(self, input_zip):
   1347     offset, size = self._GetPayloadMetadataOffsetAndSize(input_zip)
   1348     return ['payload_metadata.bin:{}:{}'.format(offset, size)]
   1349 
   1350   @staticmethod
   1351   def _GetPayloadMetadataOffsetAndSize(input_zip):
   1352     """Computes the offset and size of the payload metadata for a given package.
   1353 
   1354     (From system/update_engine/update_metadata.proto)
   1355     A delta update file contains all the deltas needed to update a system from
   1356     one specific version to another specific version. The update format is
   1357     represented by this struct pseudocode:
   1358 
   1359     struct delta_update_file {
   1360       char magic[4] = "CrAU";
   1361       uint64 file_format_version;
   1362       uint64 manifest_size;  // Size of protobuf DeltaArchiveManifest
   1363 
   1364       // Only present if format_version > 1:
   1365       uint32 metadata_signature_size;
   1366 
   1367       // The Bzip2 compressed DeltaArchiveManifest
   1368       char manifest[metadata_signature_size];
   1369 
   1370       // The signature of the metadata (from the beginning of the payload up to
   1371       // this location, not including the signature itself). This is a
   1372       // serialized Signatures message.
   1373       char medatada_signature_message[metadata_signature_size];
   1374 
   1375       // Data blobs for files, no specific format. The specific offset
   1376       // and length of each data blob is recorded in the DeltaArchiveManifest.
   1377       struct {
   1378         char data[];
   1379       } blobs[];
   1380 
   1381       // These two are not signed:
   1382       uint64 payload_signatures_message_size;
   1383       char payload_signatures_message[];
   1384     };
   1385 
   1386     'payload-metadata.bin' contains all the bytes from the beginning of the
   1387     payload, till the end of 'medatada_signature_message'.
   1388     """
   1389     payload_info = input_zip.getinfo('payload.bin')
   1390     payload_offset = payload_info.header_offset
   1391     payload_offset += zipfile.sizeFileHeader
   1392     payload_offset += len(payload_info.extra) + len(payload_info.filename)
   1393     payload_size = payload_info.file_size
   1394 
   1395     with input_zip.open('payload.bin', 'r') as payload_fp:
   1396       header_bin = payload_fp.read(24)
   1397 
   1398     # network byte order (big-endian)
   1399     header = struct.unpack("!IQQL", header_bin)
   1400 
   1401     # 'CrAU'
   1402     magic = header[0]
   1403     assert magic == 0x43724155, "Invalid magic: {:x}".format(magic)
   1404 
   1405     manifest_size = header[2]
   1406     metadata_signature_size = header[3]
   1407     metadata_total = 24 + manifest_size + metadata_signature_size
   1408     assert metadata_total < payload_size
   1409 
   1410     return (payload_offset, metadata_total)
   1411 
   1412 
   1413 class NonAbOtaPropertyFiles(PropertyFiles):
   1414   """The property-files for non-A/B OTA.
   1415 
   1416   For non-A/B OTA, the property-files string contains the info for METADATA
   1417   entry, with which a system updater can be fetched the package metadata prior
   1418   to downloading the entire package.
   1419   """
   1420 
   1421   def __init__(self):
   1422     super(NonAbOtaPropertyFiles, self).__init__()
   1423     self.name = 'ota-property-files'
   1424 
   1425 
   1426 def FinalizeMetadata(metadata, input_file, output_file, needed_property_files):
   1427   """Finalizes the metadata and signs an A/B OTA package.
   1428 
   1429   In order to stream an A/B OTA package, we need 'ota-streaming-property-files'
   1430   that contains the offsets and sizes for the ZIP entries. An example
   1431   property-files string is as follows.
   1432 
   1433     "payload.bin:679:343,payload_properties.txt:378:45,metadata:69:379"
   1434 
   1435   OTA server can pass down this string, in addition to the package URL, to the
   1436   system update client. System update client can then fetch individual ZIP
   1437   entries (ZIP_STORED) directly at the given offset of the URL.
   1438 
   1439   Args:
   1440     metadata: The metadata dict for the package.
   1441     input_file: The input ZIP filename that doesn't contain the package METADATA
   1442         entry yet.
   1443     output_file: The final output ZIP filename.
   1444     needed_property_files: The list of PropertyFiles' to be generated.
   1445   """
   1446 
   1447   def ComputeAllPropertyFiles(input_file, needed_property_files):
   1448     # Write the current metadata entry with placeholders.
   1449     with zipfile.ZipFile(input_file) as input_zip:
   1450       for property_files in needed_property_files:
   1451         metadata[property_files.name] = property_files.Compute(input_zip)
   1452       namelist = input_zip.namelist()
   1453 
   1454     if METADATA_NAME in namelist:
   1455       common.ZipDelete(input_file, METADATA_NAME)
   1456     output_zip = zipfile.ZipFile(input_file, 'a')
   1457     WriteMetadata(metadata, output_zip)
   1458     common.ZipClose(output_zip)
   1459 
   1460     if OPTIONS.no_signing:
   1461       return input_file
   1462 
   1463     prelim_signing = common.MakeTempFile(suffix='.zip')
   1464     SignOutput(input_file, prelim_signing)
   1465     return prelim_signing
   1466 
   1467   def FinalizeAllPropertyFiles(prelim_signing, needed_property_files):
   1468     with zipfile.ZipFile(prelim_signing) as prelim_signing_zip:
   1469       for property_files in needed_property_files:
   1470         metadata[property_files.name] = property_files.Finalize(
   1471             prelim_signing_zip, len(metadata[property_files.name]))
   1472 
   1473   # SignOutput(), which in turn calls signapk.jar, will possibly reorder the ZIP
   1474   # entries, as well as padding the entry headers. We do a preliminary signing
   1475   # (with an incomplete metadata entry) to allow that to happen. Then compute
   1476   # the ZIP entry offsets, write back the final metadata and do the final
   1477   # signing.
   1478   prelim_signing = ComputeAllPropertyFiles(input_file, needed_property_files)
   1479   try:
   1480     FinalizeAllPropertyFiles(prelim_signing, needed_property_files)
   1481   except PropertyFiles.InsufficientSpaceException:
   1482     # Even with the preliminary signing, the entry orders may change
   1483     # dramatically, which leads to insufficiently reserved space during the
   1484     # first call to ComputeAllPropertyFiles(). In that case, we redo all the
   1485     # preliminary signing works, based on the already ordered ZIP entries, to
   1486     # address the issue.
   1487     prelim_signing = ComputeAllPropertyFiles(
   1488         prelim_signing, needed_property_files)
   1489     FinalizeAllPropertyFiles(prelim_signing, needed_property_files)
   1490 
   1491   # Replace the METADATA entry.
   1492   common.ZipDelete(prelim_signing, METADATA_NAME)
   1493   output_zip = zipfile.ZipFile(prelim_signing, 'a')
   1494   WriteMetadata(metadata, output_zip)
   1495   common.ZipClose(output_zip)
   1496 
   1497   # Re-sign the package after updating the metadata entry.
   1498   if OPTIONS.no_signing:
   1499     output_file = prelim_signing
   1500   else:
   1501     SignOutput(prelim_signing, output_file)
   1502 
   1503   # Reopen the final signed zip to double check the streaming metadata.
   1504   with zipfile.ZipFile(output_file) as output_zip:
   1505     for property_files in needed_property_files:
   1506       property_files.Verify(output_zip, metadata[property_files.name].strip())
   1507 
   1508   # If requested, dump the metadata to a separate file.
   1509   output_metadata_path = OPTIONS.output_metadata_path
   1510   if output_metadata_path:
   1511     WriteMetadata(metadata, output_metadata_path)
   1512 
   1513 
   1514 def WriteBlockIncrementalOTAPackage(target_zip, source_zip, output_file):
   1515   target_info = BuildInfo(OPTIONS.target_info_dict, OPTIONS.oem_dicts)
   1516   source_info = BuildInfo(OPTIONS.source_info_dict, OPTIONS.oem_dicts)
   1517 
   1518   target_api_version = target_info["recovery_api_version"]
   1519   source_api_version = source_info["recovery_api_version"]
   1520   if source_api_version == 0:
   1521     logger.warning(
   1522         "Generating edify script for a source that can't install it.")
   1523 
   1524   script = edify_generator.EdifyGenerator(
   1525       source_api_version, target_info, fstab=source_info["fstab"])
   1526 
   1527   if target_info.oem_props or source_info.oem_props:
   1528     if not OPTIONS.oem_no_mount:
   1529       source_info.WriteMountOemScript(script)
   1530 
   1531   metadata = GetPackageMetadata(target_info, source_info)
   1532 
   1533   if not OPTIONS.no_signing:
   1534     staging_file = common.MakeTempFile(suffix='.zip')
   1535   else:
   1536     staging_file = output_file
   1537 
   1538   output_zip = zipfile.ZipFile(
   1539       staging_file, "w", compression=zipfile.ZIP_DEFLATED)
   1540 
   1541   device_specific = common.DeviceSpecificParams(
   1542       source_zip=source_zip,
   1543       source_version=source_api_version,
   1544       source_tmp=OPTIONS.source_tmp,
   1545       target_zip=target_zip,
   1546       target_version=target_api_version,
   1547       target_tmp=OPTIONS.target_tmp,
   1548       output_zip=output_zip,
   1549       script=script,
   1550       metadata=metadata,
   1551       info_dict=source_info)
   1552 
   1553   source_boot = common.GetBootableImage(
   1554       "/tmp/boot.img", "boot.img", OPTIONS.source_tmp, "BOOT", source_info)
   1555   target_boot = common.GetBootableImage(
   1556       "/tmp/boot.img", "boot.img", OPTIONS.target_tmp, "BOOT", target_info)
   1557   updating_boot = (not OPTIONS.two_step and
   1558                    (source_boot.data != target_boot.data))
   1559 
   1560   target_recovery = common.GetBootableImage(
   1561       "/tmp/recovery.img", "recovery.img", OPTIONS.target_tmp, "RECOVERY")
   1562 
   1563   # See notes in common.GetUserImage()
   1564   allow_shared_blocks = (source_info.get('ext4_share_dup_blocks') == "true" or
   1565                          target_info.get('ext4_share_dup_blocks') == "true")
   1566   system_src = common.GetUserImage("system", OPTIONS.source_tmp, source_zip,
   1567                                    info_dict=source_info,
   1568                                    allow_shared_blocks=allow_shared_blocks)
   1569 
   1570   hashtree_info_generator = verity_utils.CreateHashtreeInfoGenerator(
   1571       "system", 4096, target_info)
   1572   system_tgt = common.GetUserImage("system", OPTIONS.target_tmp, target_zip,
   1573                                    info_dict=target_info,
   1574                                    allow_shared_blocks=allow_shared_blocks,
   1575                                    hashtree_info_generator=
   1576                                    hashtree_info_generator)
   1577 
   1578   blockimgdiff_version = max(
   1579       int(i) for i in target_info.get("blockimgdiff_versions", "1").split(","))
   1580   assert blockimgdiff_version >= 3
   1581 
   1582   # Check the first block of the source system partition for remount R/W only
   1583   # if the filesystem is ext4.
   1584   system_src_partition = source_info["fstab"]["/system"]
   1585   check_first_block = system_src_partition.fs_type == "ext4"
   1586   # Disable using imgdiff for squashfs. 'imgdiff -z' expects input files to be
   1587   # in zip formats. However with squashfs, a) all files are compressed in LZ4;
   1588   # b) the blocks listed in block map may not contain all the bytes for a given
   1589   # file (because they're rounded to be 4K-aligned).
   1590   system_tgt_partition = target_info["fstab"]["/system"]
   1591   disable_imgdiff = (system_src_partition.fs_type == "squashfs" or
   1592                      system_tgt_partition.fs_type == "squashfs")
   1593   system_diff = common.BlockDifference("system", system_tgt, system_src,
   1594                                        check_first_block,
   1595                                        version=blockimgdiff_version,
   1596                                        disable_imgdiff=disable_imgdiff)
   1597 
   1598   if HasVendorPartition(target_zip):
   1599     if not HasVendorPartition(source_zip):
   1600       raise RuntimeError("can't generate incremental that adds /vendor")
   1601     vendor_src = common.GetUserImage("vendor", OPTIONS.source_tmp, source_zip,
   1602                                      info_dict=source_info,
   1603                                      allow_shared_blocks=allow_shared_blocks)
   1604     hashtree_info_generator = verity_utils.CreateHashtreeInfoGenerator(
   1605         "vendor", 4096, target_info)
   1606     vendor_tgt = common.GetUserImage(
   1607         "vendor", OPTIONS.target_tmp, target_zip,
   1608         info_dict=target_info,
   1609         allow_shared_blocks=allow_shared_blocks,
   1610         hashtree_info_generator=hashtree_info_generator)
   1611 
   1612     # Check first block of vendor partition for remount R/W only if
   1613     # disk type is ext4
   1614     vendor_partition = source_info["fstab"]["/vendor"]
   1615     check_first_block = vendor_partition.fs_type == "ext4"
   1616     disable_imgdiff = vendor_partition.fs_type == "squashfs"
   1617     vendor_diff = common.BlockDifference("vendor", vendor_tgt, vendor_src,
   1618                                          check_first_block,
   1619                                          version=blockimgdiff_version,
   1620                                          disable_imgdiff=disable_imgdiff)
   1621   else:
   1622     vendor_diff = None
   1623 
   1624   AddCompatibilityArchiveIfTrebleEnabled(
   1625       target_zip, output_zip, target_info, source_info)
   1626 
   1627   # Assertions (e.g. device properties check).
   1628   target_info.WriteDeviceAssertions(script, OPTIONS.oem_no_mount)
   1629   device_specific.IncrementalOTA_Assertions()
   1630 
   1631   # Two-step incremental package strategy (in chronological order,
   1632   # which is *not* the order in which the generated script has
   1633   # things):
   1634   #
   1635   # if stage is not "2/3" or "3/3":
   1636   #    do verification on current system
   1637   #    write recovery image to boot partition
   1638   #    set stage to "2/3"
   1639   #    reboot to boot partition and restart recovery
   1640   # else if stage is "2/3":
   1641   #    write recovery image to recovery partition
   1642   #    set stage to "3/3"
   1643   #    reboot to recovery partition and restart recovery
   1644   # else:
   1645   #    (stage must be "3/3")
   1646   #    perform update:
   1647   #       patch system files, etc.
   1648   #       force full install of new boot image
   1649   #       set up system to update recovery partition on first boot
   1650   #    complete script normally
   1651   #    (allow recovery to mark itself finished and reboot)
   1652 
   1653   if OPTIONS.two_step:
   1654     if not source_info.get("multistage_support"):
   1655       assert False, "two-step packages not supported by this build"
   1656     fs = source_info["fstab"]["/misc"]
   1657     assert fs.fs_type.upper() == "EMMC", \
   1658         "two-step packages only supported on devices with EMMC /misc partitions"
   1659     bcb_dev = {"bcb_dev" : fs.device}
   1660     common.ZipWriteStr(output_zip, "recovery.img", target_recovery.data)
   1661     script.AppendExtra("""
   1662 if get_stage("%(bcb_dev)s") == "2/3" then
   1663 """ % bcb_dev)
   1664 
   1665     # Stage 2/3: Write recovery image to /recovery (currently running /boot).
   1666     script.Comment("Stage 2/3")
   1667     script.AppendExtra("sleep(20);\n")
   1668     script.WriteRawImage("/recovery", "recovery.img")
   1669     script.AppendExtra("""
   1670 set_stage("%(bcb_dev)s", "3/3");
   1671 reboot_now("%(bcb_dev)s", "recovery");
   1672 else if get_stage("%(bcb_dev)s") != "3/3" then
   1673 """ % bcb_dev)
   1674 
   1675     # Stage 1/3: (a) Verify the current system.
   1676     script.Comment("Stage 1/3")
   1677 
   1678   # Dump fingerprints
   1679   script.Print("Source: {}".format(source_info.fingerprint))
   1680   script.Print("Target: {}".format(target_info.fingerprint))
   1681 
   1682   script.Print("Verifying current system...")
   1683 
   1684   device_specific.IncrementalOTA_VerifyBegin()
   1685 
   1686   WriteFingerprintAssertion(script, target_info, source_info)
   1687 
   1688   # Check the required cache size (i.e. stashed blocks).
   1689   size = []
   1690   if system_diff:
   1691     size.append(system_diff.required_cache)
   1692   if vendor_diff:
   1693     size.append(vendor_diff.required_cache)
   1694 
   1695   if updating_boot:
   1696     boot_type, boot_device = common.GetTypeAndDevice("/boot", source_info)
   1697     d = common.Difference(target_boot, source_boot)
   1698     _, _, d = d.ComputePatch()
   1699     if d is None:
   1700       include_full_boot = True
   1701       common.ZipWriteStr(output_zip, "boot.img", target_boot.data)
   1702     else:
   1703       include_full_boot = False
   1704 
   1705       logger.info(
   1706           "boot      target: %d  source: %d  diff: %d", target_boot.size,
   1707           source_boot.size, len(d))
   1708 
   1709       common.ZipWriteStr(output_zip, "boot.img.p", d)
   1710 
   1711       script.PatchPartitionCheck(
   1712           "{}:{}:{}:{}".format(
   1713               boot_type, boot_device, target_boot.size, target_boot.sha1),
   1714           "{}:{}:{}:{}".format(
   1715               boot_type, boot_device, source_boot.size, source_boot.sha1))
   1716 
   1717       size.append(target_boot.size)
   1718 
   1719   if size:
   1720     script.CacheFreeSpaceCheck(max(size))
   1721 
   1722   device_specific.IncrementalOTA_VerifyEnd()
   1723 
   1724   if OPTIONS.two_step:
   1725     # Stage 1/3: (b) Write recovery image to /boot.
   1726     _WriteRecoveryImageToBoot(script, output_zip)
   1727 
   1728     script.AppendExtra("""
   1729 set_stage("%(bcb_dev)s", "2/3");
   1730 reboot_now("%(bcb_dev)s", "");
   1731 else
   1732 """ % bcb_dev)
   1733 
   1734     # Stage 3/3: Make changes.
   1735     script.Comment("Stage 3/3")
   1736 
   1737   # Verify the existing partitions.
   1738   system_diff.WriteVerifyScript(script, touched_blocks_only=True)
   1739   if vendor_diff:
   1740     vendor_diff.WriteVerifyScript(script, touched_blocks_only=True)
   1741   device_specific_diffs = device_specific.IncrementalOTA_GetBlockDifferences()
   1742   if device_specific_diffs:
   1743     assert all(isinstance(diff, common.BlockDifference)
   1744                for diff in device_specific_diffs), \
   1745         "IncrementalOTA_GetBlockDifferences is not returning a list of " \
   1746         "BlockDifference objects"
   1747     for diff in device_specific_diffs:
   1748       diff.WriteVerifyScript(script, touched_blocks_only=True)
   1749 
   1750   script.Comment("---- start making changes here ----")
   1751 
   1752   device_specific.IncrementalOTA_InstallBegin()
   1753 
   1754   block_diffs = [system_diff]
   1755   progress_dict = {"system": 0.8 if vendor_diff else 0.9}
   1756   if vendor_diff:
   1757     block_diffs.append(vendor_diff)
   1758     progress_dict["vendor"] = 0.1
   1759   if device_specific_diffs:
   1760     block_diffs += device_specific_diffs
   1761 
   1762   if OPTIONS.source_info_dict.get("use_dynamic_partitions") == "true":
   1763     if OPTIONS.target_info_dict.get("use_dynamic_partitions") != "true":
   1764       raise RuntimeError(
   1765           "can't generate incremental that disables dynamic partitions")
   1766     dynamic_partitions_diff = common.DynamicPartitionsDifference(
   1767         info_dict=OPTIONS.target_info_dict,
   1768         source_info_dict=OPTIONS.source_info_dict,
   1769         block_diffs=block_diffs,
   1770         progress_dict=progress_dict)
   1771     dynamic_partitions_diff.WriteScript(
   1772         script, output_zip, write_verify_script=OPTIONS.verify)
   1773   else:
   1774     for block_diff in block_diffs:
   1775       block_diff.WriteScript(script, output_zip,
   1776                              progress=progress_dict.get(block_diff.partition),
   1777                              write_verify_script=OPTIONS.verify)
   1778 
   1779   if OPTIONS.two_step:
   1780     common.ZipWriteStr(output_zip, "boot.img", target_boot.data)
   1781     script.WriteRawImage("/boot", "boot.img")
   1782     logger.info("writing full boot image (forced by two-step mode)")
   1783 
   1784   if not OPTIONS.two_step:
   1785     if updating_boot:
   1786       if include_full_boot:
   1787         logger.info("boot image changed; including full.")
   1788         script.Print("Installing boot image...")
   1789         script.WriteRawImage("/boot", "boot.img")
   1790       else:
   1791         # Produce the boot image by applying a patch to the current
   1792         # contents of the boot partition, and write it back to the
   1793         # partition.
   1794         logger.info("boot image changed; including patch.")
   1795         script.Print("Patching boot image...")
   1796         script.ShowProgress(0.1, 10)
   1797         script.PatchPartition(
   1798             '{}:{}:{}:{}'.format(
   1799                 boot_type, boot_device, target_boot.size, target_boot.sha1),
   1800             '{}:{}:{}:{}'.format(
   1801                 boot_type, boot_device, source_boot.size, source_boot.sha1),
   1802             'boot.img.p')
   1803     else:
   1804       logger.info("boot image unchanged; skipping.")
   1805 
   1806   # Do device-specific installation (eg, write radio image).
   1807   device_specific.IncrementalOTA_InstallEnd()
   1808 
   1809   if OPTIONS.extra_script is not None:
   1810     script.AppendExtra(OPTIONS.extra_script)
   1811 
   1812   if OPTIONS.wipe_user_data:
   1813     script.Print("Erasing user data...")
   1814     script.FormatPartition("/data")
   1815 
   1816   if OPTIONS.two_step:
   1817     script.AppendExtra("""
   1818 set_stage("%(bcb_dev)s", "");
   1819 endif;
   1820 endif;
   1821 """ % bcb_dev)
   1822 
   1823   script.SetProgress(1)
   1824   # For downgrade OTAs, we prefer to use the update-binary in the source
   1825   # build that is actually newer than the one in the target build.
   1826   if OPTIONS.downgrade:
   1827     script.AddToZip(source_zip, output_zip, input_path=OPTIONS.updater_binary)
   1828   else:
   1829     script.AddToZip(target_zip, output_zip, input_path=OPTIONS.updater_binary)
   1830   metadata["ota-required-cache"] = str(script.required_cache)
   1831 
   1832   # We haven't written the metadata entry yet, which will be handled in
   1833   # FinalizeMetadata().
   1834   common.ZipClose(output_zip)
   1835 
   1836   # Sign the generated zip package unless no_signing is specified.
   1837   needed_property_files = (
   1838       NonAbOtaPropertyFiles(),
   1839   )
   1840   FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
   1841 
   1842 
   1843 def GetTargetFilesZipForSecondaryImages(input_file, skip_postinstall=False):
   1844   """Returns a target-files.zip file for generating secondary payload.
   1845 
   1846   Although the original target-files.zip already contains secondary slot
   1847   images (i.e. IMAGES/system_other.img), we need to rename the files to the
   1848   ones without _other suffix. Note that we cannot instead modify the names in
   1849   META/ab_partitions.txt, because there are no matching partitions on device.
   1850 
   1851   For the partitions that don't have secondary images, the ones for primary
   1852   slot will be used. This is to ensure that we always have valid boot, vbmeta,
   1853   bootloader images in the inactive slot.
   1854 
   1855   Args:
   1856     input_file: The input target-files.zip file.
   1857     skip_postinstall: Whether to skip copying the postinstall config file.
   1858 
   1859   Returns:
   1860     The filename of the target-files.zip for generating secondary payload.
   1861   """
   1862   target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
   1863   target_zip = zipfile.ZipFile(target_file, 'w', allowZip64=True)
   1864 
   1865   with zipfile.ZipFile(input_file, 'r') as input_zip:
   1866     infolist = input_zip.infolist()
   1867     namelist = input_zip.namelist()
   1868 
   1869   input_tmp = common.UnzipTemp(input_file, UNZIP_PATTERN)
   1870   for info in infolist:
   1871     unzipped_file = os.path.join(input_tmp, *info.filename.split('/'))
   1872     if info.filename == 'IMAGES/system_other.img':
   1873       common.ZipWrite(target_zip, unzipped_file, arcname='IMAGES/system.img')
   1874 
   1875     # Primary images and friends need to be skipped explicitly.
   1876     elif info.filename in ('IMAGES/system.img',
   1877                            'IMAGES/system.map'):
   1878       pass
   1879 
   1880     # Skip copying the postinstall config if requested.
   1881     elif skip_postinstall and info.filename == POSTINSTALL_CONFIG:
   1882       pass
   1883 
   1884     elif info.filename.startswith(('META/', 'IMAGES/', 'RADIO/')):
   1885       common.ZipWrite(target_zip, unzipped_file, arcname=info.filename)
   1886 
   1887   common.ZipClose(target_zip)
   1888 
   1889   return target_file
   1890 
   1891 
   1892 def GetTargetFilesZipWithoutPostinstallConfig(input_file):
   1893   """Returns a target-files.zip that's not containing postinstall_config.txt.
   1894 
   1895   This allows brillo_update_payload script to skip writing all the postinstall
   1896   hooks in the generated payload. The input target-files.zip file will be
   1897   duplicated, with 'META/postinstall_config.txt' skipped. If input_file doesn't
   1898   contain the postinstall_config.txt entry, the input file will be returned.
   1899 
   1900   Args:
   1901     input_file: The input target-files.zip filename.
   1902 
   1903   Returns:
   1904     The filename of target-files.zip that doesn't contain postinstall config.
   1905   """
   1906   # We should only make a copy if postinstall_config entry exists.
   1907   with zipfile.ZipFile(input_file, 'r') as input_zip:
   1908     if POSTINSTALL_CONFIG not in input_zip.namelist():
   1909       return input_file
   1910 
   1911   target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
   1912   shutil.copyfile(input_file, target_file)
   1913   common.ZipDelete(target_file, POSTINSTALL_CONFIG)
   1914   return target_file
   1915 
   1916 
   1917 def GetTargetFilesZipForRetrofitDynamicPartitions(input_file,
   1918                                                   super_block_devices,
   1919                                                   dynamic_partition_list):
   1920   """Returns a target-files.zip for retrofitting dynamic partitions.
   1921 
   1922   This allows brillo_update_payload to generate an OTA based on the exact
   1923   bits on the block devices. Postinstall is disabled.
   1924 
   1925   Args:
   1926     input_file: The input target-files.zip filename.
   1927     super_block_devices: The list of super block devices
   1928     dynamic_partition_list: The list of dynamic partitions
   1929 
   1930   Returns:
   1931     The filename of target-files.zip with *.img replaced with super_*.img for
   1932     each block device in super_block_devices.
   1933   """
   1934   assert super_block_devices, "No super_block_devices are specified."
   1935 
   1936   replace = {'OTA/super_{}.img'.format(dev): 'IMAGES/{}.img'.format(dev)
   1937              for dev in super_block_devices}
   1938 
   1939   target_file = common.MakeTempFile(prefix="targetfiles-", suffix=".zip")
   1940   shutil.copyfile(input_file, target_file)
   1941 
   1942   with zipfile.ZipFile(input_file, 'r') as input_zip:
   1943     namelist = input_zip.namelist()
   1944 
   1945   input_tmp = common.UnzipTemp(input_file, RETROFIT_DAP_UNZIP_PATTERN)
   1946 
   1947   # Remove partitions from META/ab_partitions.txt that is in
   1948   # dynamic_partition_list but not in super_block_devices so that
   1949   # brillo_update_payload won't generate update for those logical partitions.
   1950   ab_partitions_file = os.path.join(input_tmp, *AB_PARTITIONS.split('/'))
   1951   with open(ab_partitions_file) as f:
   1952     ab_partitions_lines = f.readlines()
   1953     ab_partitions = [line.strip() for line in ab_partitions_lines]
   1954   # Assert that all super_block_devices are in ab_partitions
   1955   super_device_not_updated = [partition for partition in super_block_devices
   1956                               if partition not in ab_partitions]
   1957   assert not super_device_not_updated, \
   1958       "{} is in super_block_devices but not in {}".format(
   1959           super_device_not_updated, AB_PARTITIONS)
   1960   # ab_partitions -= (dynamic_partition_list - super_block_devices)
   1961   new_ab_partitions = common.MakeTempFile(prefix="ab_partitions", suffix=".txt")
   1962   with open(new_ab_partitions, 'w') as f:
   1963     for partition in ab_partitions:
   1964       if (partition in dynamic_partition_list and
   1965           partition not in super_block_devices):
   1966           logger.info("Dropping %s from ab_partitions.txt", partition)
   1967           continue
   1968       f.write(partition + "\n")
   1969   to_delete = [AB_PARTITIONS]
   1970 
   1971   # Always skip postinstall for a retrofit update.
   1972   to_delete += [POSTINSTALL_CONFIG]
   1973 
   1974   # Delete dynamic_partitions_info.txt so that brillo_update_payload thinks this
   1975   # is a regular update on devices without dynamic partitions support.
   1976   to_delete += [DYNAMIC_PARTITION_INFO]
   1977 
   1978   # Remove the existing partition images as well as the map files.
   1979   to_delete += replace.values()
   1980   to_delete += ['IMAGES/{}.map'.format(dev) for dev in super_block_devices]
   1981 
   1982   common.ZipDelete(target_file, to_delete)
   1983 
   1984   target_zip = zipfile.ZipFile(target_file, 'a', allowZip64=True)
   1985 
   1986   # Write super_{foo}.img as {foo}.img.
   1987   for src, dst in replace.items():
   1988     assert src in namelist, \
   1989           'Missing {} in {}; {} cannot be written'.format(src, input_file, dst)
   1990     unzipped_file = os.path.join(input_tmp, *src.split('/'))
   1991     common.ZipWrite(target_zip, unzipped_file, arcname=dst)
   1992 
   1993   # Write new ab_partitions.txt file
   1994   common.ZipWrite(target_zip, new_ab_partitions, arcname=AB_PARTITIONS)
   1995 
   1996   common.ZipClose(target_zip)
   1997 
   1998   return target_file
   1999 
   2000 
   2001 def WriteABOTAPackageWithBrilloScript(target_file, output_file,
   2002                                       source_file=None):
   2003   """Generates an Android OTA package that has A/B update payload."""
   2004   # Stage the output zip package for package signing.
   2005   if not OPTIONS.no_signing:
   2006     staging_file = common.MakeTempFile(suffix='.zip')
   2007   else:
   2008     staging_file = output_file
   2009   output_zip = zipfile.ZipFile(staging_file, "w",
   2010                                compression=zipfile.ZIP_DEFLATED)
   2011 
   2012   if source_file is not None:
   2013     target_info = BuildInfo(OPTIONS.target_info_dict, OPTIONS.oem_dicts)
   2014     source_info = BuildInfo(OPTIONS.source_info_dict, OPTIONS.oem_dicts)
   2015   else:
   2016     target_info = BuildInfo(OPTIONS.info_dict, OPTIONS.oem_dicts)
   2017     source_info = None
   2018 
   2019   # Metadata to comply with Android OTA package format.
   2020   metadata = GetPackageMetadata(target_info, source_info)
   2021 
   2022   if OPTIONS.retrofit_dynamic_partitions:
   2023     target_file = GetTargetFilesZipForRetrofitDynamicPartitions(
   2024         target_file, target_info.get("super_block_devices").strip().split(),
   2025         target_info.get("dynamic_partition_list").strip().split())
   2026   elif OPTIONS.skip_postinstall:
   2027     target_file = GetTargetFilesZipWithoutPostinstallConfig(target_file)
   2028 
   2029   # Generate payload.
   2030   payload = Payload()
   2031 
   2032   # Enforce a max timestamp this payload can be applied on top of.
   2033   if OPTIONS.downgrade:
   2034     max_timestamp = source_info.GetBuildProp("ro.build.date.utc")
   2035   else:
   2036     max_timestamp = metadata["post-timestamp"]
   2037   additional_args = ["--max_timestamp", max_timestamp]
   2038 
   2039   payload.Generate(target_file, source_file, additional_args)
   2040 
   2041   # Sign the payload.
   2042   payload_signer = PayloadSigner()
   2043   payload.Sign(payload_signer)
   2044 
   2045   # Write the payload into output zip.
   2046   payload.WriteToZip(output_zip)
   2047 
   2048   # Generate and include the secondary payload that installs secondary images
   2049   # (e.g. system_other.img).
   2050   if OPTIONS.include_secondary:
   2051     # We always include a full payload for the secondary slot, even when
   2052     # building an incremental OTA. See the comments for "--include_secondary".
   2053     secondary_target_file = GetTargetFilesZipForSecondaryImages(
   2054         target_file, OPTIONS.skip_postinstall)
   2055     secondary_payload = Payload(secondary=True)
   2056     secondary_payload.Generate(secondary_target_file,
   2057                                additional_args=additional_args)
   2058     secondary_payload.Sign(payload_signer)
   2059     secondary_payload.WriteToZip(output_zip)
   2060 
   2061   # If dm-verity is supported for the device, copy contents of care_map
   2062   # into A/B OTA package.
   2063   target_zip = zipfile.ZipFile(target_file, "r")
   2064   if (target_info.get("verity") == "true" or
   2065       target_info.get("avb_enable") == "true"):
   2066     care_map_list = [x for x in ["care_map.pb", "care_map.txt"] if
   2067                      "META/" + x in target_zip.namelist()]
   2068 
   2069     # Adds care_map if either the protobuf format or the plain text one exists.
   2070     if care_map_list:
   2071       care_map_name = care_map_list[0]
   2072       care_map_data = target_zip.read("META/" + care_map_name)
   2073       # In order to support streaming, care_map needs to be packed as
   2074       # ZIP_STORED.
   2075       common.ZipWriteStr(output_zip, care_map_name, care_map_data,
   2076                          compress_type=zipfile.ZIP_STORED)
   2077     else:
   2078       logger.warning("Cannot find care map file in target_file package")
   2079 
   2080   AddCompatibilityArchiveIfTrebleEnabled(
   2081       target_zip, output_zip, target_info, source_info)
   2082 
   2083   common.ZipClose(target_zip)
   2084 
   2085   # We haven't written the metadata entry yet, which will be handled in
   2086   # FinalizeMetadata().
   2087   common.ZipClose(output_zip)
   2088 
   2089   # AbOtaPropertyFiles intends to replace StreamingPropertyFiles, as it covers
   2090   # all the info of the latter. However, system updaters and OTA servers need to
   2091   # take time to switch to the new flag. We keep both of the flags for
   2092   # P-timeframe, and will remove StreamingPropertyFiles in later release.
   2093   needed_property_files = (
   2094       AbOtaPropertyFiles(),
   2095       StreamingPropertyFiles(),
   2096   )
   2097   FinalizeMetadata(metadata, staging_file, output_file, needed_property_files)
   2098 
   2099 
   2100 def main(argv):
   2101 
   2102   def option_handler(o, a):
   2103     if o in ("-k", "--package_key"):
   2104       OPTIONS.package_key = a
   2105     elif o in ("-i", "--incremental_from"):
   2106       OPTIONS.incremental_source = a
   2107     elif o == "--full_radio":
   2108       OPTIONS.full_radio = True
   2109     elif o == "--full_bootloader":
   2110       OPTIONS.full_bootloader = True
   2111     elif o == "--wipe_user_data":
   2112       OPTIONS.wipe_user_data = True
   2113     elif o == "--downgrade":
   2114       OPTIONS.downgrade = True
   2115       OPTIONS.wipe_user_data = True
   2116     elif o == "--override_timestamp":
   2117       OPTIONS.downgrade = True
   2118     elif o in ("-o", "--oem_settings"):
   2119       OPTIONS.oem_source = a.split(',')
   2120     elif o == "--oem_no_mount":
   2121       OPTIONS.oem_no_mount = True
   2122     elif o in ("-e", "--extra_script"):
   2123       OPTIONS.extra_script = a
   2124     elif o in ("-t", "--worker_threads"):
   2125       if a.isdigit():
   2126         OPTIONS.worker_threads = int(a)
   2127       else:
   2128         raise ValueError("Cannot parse value %r for option %r - only "
   2129                          "integers are allowed." % (a, o))
   2130     elif o in ("-2", "--two_step"):
   2131       OPTIONS.two_step = True
   2132     elif o == "--include_secondary":
   2133       OPTIONS.include_secondary = True
   2134     elif o == "--no_signing":
   2135       OPTIONS.no_signing = True
   2136     elif o == "--verify":
   2137       OPTIONS.verify = True
   2138     elif o == "--block":
   2139       OPTIONS.block_based = True
   2140     elif o in ("-b", "--binary"):
   2141       OPTIONS.updater_binary = a
   2142     elif o == "--stash_threshold":
   2143       try:
   2144         OPTIONS.stash_threshold = float(a)
   2145       except ValueError:
   2146         raise ValueError("Cannot parse value %r for option %r - expecting "
   2147                          "a float" % (a, o))
   2148     elif o == "--log_diff":
   2149       OPTIONS.log_diff = a
   2150     elif o == "--payload_signer":
   2151       OPTIONS.payload_signer = a
   2152     elif o == "--payload_signer_args":
   2153       OPTIONS.payload_signer_args = shlex.split(a)
   2154     elif o == "--payload_signer_key_size":
   2155       OPTIONS.payload_signer_key_size = a
   2156     elif o == "--extracted_input_target_files":
   2157       OPTIONS.extracted_input = a
   2158     elif o == "--skip_postinstall":
   2159       OPTIONS.skip_postinstall = True
   2160     elif o == "--retrofit_dynamic_partitions":
   2161       OPTIONS.retrofit_dynamic_partitions = True
   2162     elif o == "--skip_compatibility_check":
   2163       OPTIONS.skip_compatibility_check = True
   2164     elif o == "--output_metadata_path":
   2165       OPTIONS.output_metadata_path = a
   2166     else:
   2167       return False
   2168     return True
   2169 
   2170   args = common.ParseOptions(argv, __doc__,
   2171                              extra_opts="b:k:i:d:e:t:2o:",
   2172                              extra_long_opts=[
   2173                                  "package_key=",
   2174                                  "incremental_from=",
   2175                                  "full_radio",
   2176                                  "full_bootloader",
   2177                                  "wipe_user_data",
   2178                                  "downgrade",
   2179                                  "override_timestamp",
   2180                                  "extra_script=",
   2181                                  "worker_threads=",
   2182                                  "two_step",
   2183                                  "include_secondary",
   2184                                  "no_signing",
   2185                                  "block",
   2186                                  "binary=",
   2187                                  "oem_settings=",
   2188                                  "oem_no_mount",
   2189                                  "verify",
   2190                                  "stash_threshold=",
   2191                                  "log_diff=",
   2192                                  "payload_signer=",
   2193                                  "payload_signer_args=",
   2194                                  "payload_signer_key_size=",
   2195                                  "extracted_input_target_files=",
   2196                                  "skip_postinstall",
   2197                                  "retrofit_dynamic_partitions",
   2198                                  "skip_compatibility_check",
   2199                                  "output_metadata_path=",
   2200                              ], extra_option_handler=option_handler)
   2201 
   2202   if len(args) != 2:
   2203     common.Usage(__doc__)
   2204     sys.exit(1)
   2205 
   2206   common.InitLogging()
   2207 
   2208   if OPTIONS.downgrade:
   2209     # We should only allow downgrading incrementals (as opposed to full).
   2210     # Otherwise the device may go back from arbitrary build with this full
   2211     # OTA package.
   2212     if OPTIONS.incremental_source is None:
   2213       raise ValueError("Cannot generate downgradable full OTAs")
   2214 
   2215   # Load the build info dicts from the zip directly or the extracted input
   2216   # directory. We don't need to unzip the entire target-files zips, because they
   2217   # won't be needed for A/B OTAs (brillo_update_payload does that on its own).
   2218   # When loading the info dicts, we don't need to provide the second parameter
   2219   # to common.LoadInfoDict(). Specifying the second parameter allows replacing
   2220   # some properties with their actual paths, such as 'selinux_fc',
   2221   # 'ramdisk_dir', which won't be used during OTA generation.
   2222   if OPTIONS.extracted_input is not None:
   2223     OPTIONS.info_dict = common.LoadInfoDict(OPTIONS.extracted_input)
   2224   else:
   2225     with zipfile.ZipFile(args[0], 'r') as input_zip:
   2226       OPTIONS.info_dict = common.LoadInfoDict(input_zip)
   2227 
   2228   logger.info("--- target info ---")
   2229   common.DumpInfoDict(OPTIONS.info_dict)
   2230 
   2231   # Load the source build dict if applicable.
   2232   if OPTIONS.incremental_source is not None:
   2233     OPTIONS.target_info_dict = OPTIONS.info_dict
   2234     with zipfile.ZipFile(OPTIONS.incremental_source, 'r') as source_zip:
   2235       OPTIONS.source_info_dict = common.LoadInfoDict(source_zip)
   2236 
   2237     logger.info("--- source info ---")
   2238     common.DumpInfoDict(OPTIONS.source_info_dict)
   2239 
   2240   # Load OEM dicts if provided.
   2241   OPTIONS.oem_dicts = _LoadOemDicts(OPTIONS.oem_source)
   2242 
   2243   # Assume retrofitting dynamic partitions when base build does not set
   2244   # use_dynamic_partitions but target build does.
   2245   if (OPTIONS.source_info_dict and
   2246       OPTIONS.source_info_dict.get("use_dynamic_partitions") != "true" and
   2247       OPTIONS.target_info_dict.get("use_dynamic_partitions") == "true"):
   2248     if OPTIONS.target_info_dict.get("dynamic_partition_retrofit") != "true":
   2249       raise common.ExternalError(
   2250           "Expect to generate incremental OTA for retrofitting dynamic "
   2251           "partitions, but dynamic_partition_retrofit is not set in target "
   2252           "build.")
   2253     logger.info("Implicitly generating retrofit incremental OTA.")
   2254     OPTIONS.retrofit_dynamic_partitions = True
   2255 
   2256   # Skip postinstall for retrofitting dynamic partitions.
   2257   if OPTIONS.retrofit_dynamic_partitions:
   2258     OPTIONS.skip_postinstall = True
   2259 
   2260   ab_update = OPTIONS.info_dict.get("ab_update") == "true"
   2261 
   2262   # Use the default key to sign the package if not specified with package_key.
   2263   # package_keys are needed on ab_updates, so always define them if an
   2264   # ab_update is getting created.
   2265   if not OPTIONS.no_signing or ab_update:
   2266     if OPTIONS.package_key is None:
   2267       OPTIONS.package_key = OPTIONS.info_dict.get(
   2268           "default_system_dev_certificate",
   2269           "build/target/product/security/testkey")
   2270     # Get signing keys
   2271     OPTIONS.key_passwords = common.GetKeyPasswords([OPTIONS.package_key])
   2272 
   2273   if ab_update:
   2274     WriteABOTAPackageWithBrilloScript(
   2275         target_file=args[0],
   2276         output_file=args[1],
   2277         source_file=OPTIONS.incremental_source)
   2278 
   2279     logger.info("done.")
   2280     return
   2281 
   2282   # Sanity check the loaded info dicts first.
   2283   if OPTIONS.info_dict.get("no_recovery") == "true":
   2284     raise common.ExternalError(
   2285         "--- target build has specified no recovery ---")
   2286 
   2287   # Non-A/B OTAs rely on /cache partition to store temporary files.
   2288   cache_size = OPTIONS.info_dict.get("cache_size")
   2289   if cache_size is None:
   2290     logger.warning("--- can't determine the cache partition size ---")
   2291   OPTIONS.cache_size = cache_size
   2292 
   2293   if OPTIONS.extra_script is not None:
   2294     OPTIONS.extra_script = open(OPTIONS.extra_script).read()
   2295 
   2296   if OPTIONS.extracted_input is not None:
   2297     OPTIONS.input_tmp = OPTIONS.extracted_input
   2298   else:
   2299     logger.info("unzipping target target-files...")
   2300     OPTIONS.input_tmp = common.UnzipTemp(args[0], UNZIP_PATTERN)
   2301   OPTIONS.target_tmp = OPTIONS.input_tmp
   2302 
   2303   # If the caller explicitly specified the device-specific extensions path via
   2304   # -s / --device_specific, use that. Otherwise, use META/releasetools.py if it
   2305   # is present in the target target_files. Otherwise, take the path of the file
   2306   # from 'tool_extensions' in the info dict and look for that in the local
   2307   # filesystem, relative to the current directory.
   2308   if OPTIONS.device_specific is None:
   2309     from_input = os.path.join(OPTIONS.input_tmp, "META", "releasetools.py")
   2310     if os.path.exists(from_input):
   2311       logger.info("(using device-specific extensions from target_files)")
   2312       OPTIONS.device_specific = from_input
   2313     else:
   2314       OPTIONS.device_specific = OPTIONS.info_dict.get("tool_extensions")
   2315 
   2316   if OPTIONS.device_specific is not None:
   2317     OPTIONS.device_specific = os.path.abspath(OPTIONS.device_specific)
   2318 
   2319   # Generate a full OTA.
   2320   if OPTIONS.incremental_source is None:
   2321     with zipfile.ZipFile(args[0], 'r') as input_zip:
   2322       WriteFullOTAPackage(
   2323           input_zip,
   2324           output_file=args[1])
   2325 
   2326   # Generate an incremental OTA.
   2327   else:
   2328     logger.info("unzipping source target-files...")
   2329     OPTIONS.source_tmp = common.UnzipTemp(
   2330         OPTIONS.incremental_source, UNZIP_PATTERN)
   2331     with zipfile.ZipFile(args[0], 'r') as input_zip, \
   2332         zipfile.ZipFile(OPTIONS.incremental_source, 'r') as source_zip:
   2333       WriteBlockIncrementalOTAPackage(
   2334           input_zip,
   2335           source_zip,
   2336           output_file=args[1])
   2337 
   2338     if OPTIONS.log_diff:
   2339       with open(OPTIONS.log_diff, 'w') as out_file:
   2340         import target_files_diff
   2341         target_files_diff.recursiveDiff(
   2342             '', OPTIONS.source_tmp, OPTIONS.input_tmp, out_file)
   2343 
   2344   logger.info("done.")
   2345 
   2346 
   2347 if __name__ == '__main__':
   2348   try:
   2349     common.CloseInheritedPipes()
   2350     main(sys.argv[1:])
   2351   except common.ExternalError:
   2352     logger.exception("\n   ERROR:\n")
   2353     sys.exit(1)
   2354   finally:
   2355     common.Cleanup()
   2356