
Lines Matching refs: And (in gslib)

14 # See the License for the specific language governing permissions and
144 # For debugging purposes; if True, files and objects that fail hash validation
155 one of their object names and then done an MD5 hash of the name, and
162 # filling in command output and manifests.
178 # of components and final object.
307 # We create a tuple with union of options needed by CopyHelper and any
395 subdir, and non-existent bucket subdir.
408 if ((exp_dst_url.IsFileUrl() and not exp_dst_url.IsDirectory()) or
409 (exp_dst_url.IsCloudUrl() and exp_dst_url.IsBucket()
410 and not have_existing_dst_container)):
423 depends on whether there are multiple sources in this request and whether
430 file to gs://bucket/abc/file. And regardless of whether gs://bucket/abc
433 we should copy file1 to gs://bucket/abc/file1 (and similarly for file2).
441 confusing) to have an object called gs://bucket/dir and
445 and gs://bucket/dir/file2.
464 (src_url_names_container and recursion_requested))
490 return (not have_multiple_srcs and
491 not have_existing_dest_subdir and
500 Uses context-dependent naming rules that mimic Linux cp and mv behavior.
523 source and source is a stream.
531 if exp_src_url.IsFileUrl() and exp_src_url.IsStream():
537 if not recursion_requested and not have_multiple_srcs:
558 (exp_dst_url.IsFileUrl() and exp_dst_url.IsDirectory()))
559 and not exp_dst_url.url_string.endswith(exp_dst_url.delim)):
563 # Making naming behavior match how things work with local Linux cp and mv
565 # container, the plurality of the source(s), and whether the mv command is
595 # should create the objects gs://bucket/f1.txt and gs://bucket/f2.txt,
596 # assuming dir1 contains f1.txt and f2.txt.
599 if (global_copy_helper_opts.perform_mv and recursion_requested
600 and src_url_names_container and not have_existing_dest_subdir):
614 elif src_url_names_container and (exp_dst_url.IsCloudUrl() or
618 # where src_url ends. For example, for src_url=gs://bucket/ and
639 if (not recursive_move_to_new_subdir and (
643 if exp_dst_url.object_name and exp_dst_url.object_name.endswith(
670 """Creates a base64 CRC32C and/or MD5 digest from file_name.
745 'WARNING: Found no hashes to validate object downloaded from %s and '
819 (isinstance(e, ResumableUploadException) and '412' in e.message))
826 needs to go or vice versa. In that case we print an error message and
827 exit. Example: if the file "./x" exists and you try to do:
835 contains an object called gs://bucket/dir and then you run the command:
837 you'll end up with objects gs://bucket/dir, gs://bucket/dir/file1, and
865 """Partitions a file into FilePart objects to be uploaded and later composed.
868 splitting the file into parts, naming and forming a destination URL for each
869 part, and also providing the PerformParallelUploadFileToObjectArgs
876 content_type: content type for the component and final objects.
893 dst_args = {} # Arguments to create commands and pass to subprocesses.
931 The file is partitioned into parts, and then the parts are uploaded in
932 parallel, composed to form the original destination object, and deleted.
947 Elapsed upload time, uploaded Object with generation, crc32c, and size
1011 # those that were uploaded by a previous, failed run and have since
1062 and not src_url.IsStream() # We can't partition streams.
1063 and dst_url.scheme == 'gs' # Compose is only for gs.
1064 and not canned_acl) # TODO: Implement canned ACL support for compose.
1070 if (all_factors_but_size and parallel_composite_upload_threshold == 0
1071 and file_size >= PARALLEL_COMPOSITE_SUGGESTION_THRESHOLD):
1079 'configuration file. However, note that if you do this you and any '
1085 and parallel_composite_upload_threshold > 0
1086 and file_size >= parallel_composite_upload_threshold)
1104 and have_existing_dst_container is a bool indicating whether
1106 In the case where we match a subdirectory AND an object, the
1153 # that covers prefixes and object names. Listing object names covers the
1154 # _$folder$ case and the nonexistent-object-as-subdir case. However, if
1156 # prefix, this listing could be paginated and span multiple HTTP calls.
1158 # listing operation after the first page of results and just query for the
1165 elif (obj_or_prefix.datatype == CloudApi.CsObjectOrPrefixType.OBJECT and
1171 # Case 4: If no objects/prefixes matched, and nonexistent objects should be
1173 return (storage_url, expansion_empty and treat_nonexistent_object_as_subdir)
1188 if (src_url.IsFileUrl() and src_url.delim == '\\'
1189 and dst_url.IsCloudUrl()):
1196 """Checks if src_url and dst_url represent the same object or file.
1207 if src_url.IsFileUrl() and dst_url.IsFileUrl():
1213 return (src_url.url_string == dst_url.url_string and
1227 if (dst_url.IsCloudUrl() and dst_obj_metadata and
1232 if src_url.IsFileUrl() and src_url.IsStream():
1246 src_obj_metadata: Metadata for source object; must include etag and size.
1285 """Detects and sets Content-Type if src_url names a local file.
1293 if (dst_obj_metadata.contentType is None and src_url.IsFileUrl()
1294 and not src_url.IsStream()):
1301 # and 'file' would partially consume them.
1313 # Parse output by removing line delimiter and splitting on last ":
1339 Elapsed upload time, uploaded Object with generation, md5, and size fields
1385 Elapsed upload time, uploaded Object with generation, md5, and size fields
1395 This function is called by the gsutil Cloud API implementation and the
1431 # gsutil_api.UploadObjectResumable, and retries within a single upload ID
1433 # will leave the tracker file in place, and cause the upload ID to be reused
1434 # the next time the user runs gsutil and attempts the same upload).
1449 # the tracker file and try again up to max retries.
1456 # be restarted if it was the object (and not the bucket) that was missing.
1472 'and retrying.' % src_url.url_string)))
1494 and closed.
1537 src_obj_filestream: Read stream of the source file to be read and closed.
1567 if gzip_exts and len(fname_parts) > 1 and fname_parts[-1] in gzip_exts:
1583 if (src_url.IsStream() and
1590 if not parallel_composite_upload and len(hash_algs):
1660 """Creates a new download file, and deletes the file that will be replaced.
1662 Names and creates a temporary file for this download. Also, if there is an
1675 need_to_unzip: If true, a temporary zip file was used and must be
1679 if dir_name and not os.path.exists(dir_name):
1690 # For gzipped objects download to a temp file and unzip. For the XML API,
1694 # (double compressed case), there is no way we can validate the hash and
1696 if (src_obj_metadata.contentEncoding and
1743 parallel_hashing = src_obj_metadata.crc32c and UsingCrcmodExtension(crcmod)
1748 and download_strategy is not CloudApi.DownloadStrategy.ONE_SHOT
1749 and max_components > 1
1750 and hashing_okay
1751 and sliced_object_download_threshold > 0
1752 and src_obj_metadata.size >= sliced_object_download_threshold)
1755 and src_obj_metadata.size >= PARALLEL_COMPOSITE_SUGGESTION_THRESHOLD
1756 and not UsingCrcmodExtension(crcmod)
1757 and check_hashes_config != CHECK_HASH_NEVER):
1813 possible and appropriate. In the case that a resumption should not be
1815 processes from attempting resumption), and a new sliced download tracker
1819 src_obj_metadata: Metadata from the source object. Must include etag and
1844 # size is exactly the same as the source size and the tracker file matches.
1848 if (tracker_file_data['etag'] == src_obj_metadata.etag and
1849 tracker_file_data['generation'] == src_obj_metadata.generation and
1904 src_obj_metadata: Metadata from the source object. Must include etag and
1918 assert (self._start_byte <= current_file_pos and
1955 of the returned components are mutually exclusive and collectively
1997 Byte ranges are decided for each thread/process, and then the parts are
2047 expect_gzip = (src_obj_metadata.contentEncoding and
2051 server_gzip = (cp_result.server_encoding and
2055 if server_gzip and not expect_gzip:
2121 resuming = (download_start_byte != start_byte) and not download_complete
2130 # and size into the download for new downloads so that we can avoid
2155 # Delete file contents and start entire object download from scratch.
2167 if is_sliced and src_obj_metadata.size >= ResumableThreshold():
2171 # TODO: With gzip encoding (which may occur on-the-fly and not be part of
2222 # This is used to pass the mediaLink and the size into the download so that
2292 consider_md5 = src_obj_metadata.md5Hash and not sliced_download
2327 server_gzip = server_encoding and server_encoding.lower().endswith('gzip')
2352 """Validates and performs necessary operations on a downloaded file.
2365 need_to_unzip: If true, a temporary zip file was used and must be
2371 algorithm is None, an up-to-date digest is not available and the
2410 # we'll need to calculate and check it after unzipping.
2452 if 'Not a gzipped file' in str(e) and hash_invalid_exception:
2505 if dir_name and not os.path.exists(dir_name):
2545 # GCS and S3 support different ACLs and disjoint principals.
2547 and src_url.scheme != dst_url.scheme):
2573 # breaks and server errors, but the tracker callback is a no-op so this
2627 and sliced object downloads.
2640 ItemExistsError: if no clobber flag is specified and the destination
2643 and the source is an unsupported type.
2656 if dst_url.IsCloudUrl() and dst_url.scheme == 'gs':
2669 # operation can succeed with just the destination bucket and object
2682 and not global_copy_helper_opts.daisy_chain):
2691 if (src_url.scheme == 's3' and
2705 if (src_url.scheme == 's3' and
2706 global_copy_helper_opts.skip_unsupported_objects and
2717 # is not s3 (and thus differs from src_url).
2743 if (dst_url.scheme == 's3' and src_obj_size > S3_MAX_UPLOAD_SIZE
2744 and src_url != 's3'):
2753 if IS_WINDOWS and src_url.IsFileUrl() and src_url.IsStream():
2759 # already exists at the destination and prevent the upload/download
2766 # be created after the first check and before the file is fully
2775 if dst_url.IsFileUrl() and os.path.exists(dst_url.object_name):
2789 # Cloud storage API gets object and bucket name from metadata.
2845 """Load and parse a manifest file.
2861 # No header and thus not a valid manifest file.
2879 """Opens the manifest file and assigns it to the file pointer."""
2948 # Remove the item from the dictionary since we're done with it and
2978 This handles cases for file system directories, bucket, and bucket
2980 and for file://dir we'll return file://
3011 The number of components in the partitioned file, and the size of each
3091 # Newlines are used as the delimiter because only newlines and carriage
3092 # returns are invalid characters in object names, and users can specify
3114 tracker_file_lock: Thread and process-safe Lock for the tracker file.
3185 Given the list of all target objects based on partitioning the file and
3188 existing components are still valid, and which existing components should
3202 uploaded and are still valid.
3205 and are in a versioned bucket, and
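
The hits around lines 1055-1086 above come from the check that decides whether a file should be uploaded as a parallel composite upload (partition the file, upload the parts in parallel, then compose them into the final object). The standalone sketch below mirrors the shape of that decision only; the function name, parameter names, and the suggestion-threshold value are illustrative assumptions, not gsutil's actual constants or configuration plumbing.

# Illustrative sketch of the parallel-composite-upload decision; names and the
# threshold value are assumptions, not gsutil's real configuration handling.
PARALLEL_COMPOSITE_SUGGESTION_THRESHOLD = 150 * 1024 * 1024  # assumed value


def should_use_parallel_composite_upload(file_size, threshold, dst_scheme,
                                         src_is_stream=False, canned_acl=None):
    """Returns (use_parallel, suggest_enabling) for a candidate upload."""
    # Preconditions other than size (compare lines 1062-1064 above): streams
    # cannot be partitioned, compose exists only for gs:// destinations, and
    # canned ACLs are not supported for composed objects.
    all_factors_but_size = (not src_is_stream
                            and dst_scheme == 'gs'
                            and canned_acl is None)

    # A threshold of 0 disables the feature; for large files the tool instead
    # suggests enabling it (compare lines 1070-1071 and 1079).
    suggest_enabling = (
        all_factors_but_size and threshold == 0
        and file_size >= PARALLEL_COMPOSITE_SUGGESTION_THRESHOLD)

    # Otherwise the feature is used once the file exceeds the configured
    # threshold (compare lines 1085-1086).
    use_parallel = (all_factors_but_size and threshold > 0
                    and file_size >= threshold)
    return use_parallel, suggest_enabling

Distinguishing the "suggest enabling" case from the "actually do it" case is why the matched lines test both threshold == 0 and threshold > 0.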
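Similarly, the hit at line 670 names a helper that produces base64 CRC32C and/or MD5 digests used for the hash validation mentioned throughout the listing (for example lines 144 and 745). A minimal sketch of the MD5 half, assuming only the standard library (the CRC32C half would require an extension such as crcmod, as the hits around lines 1743-1757 imply); the function name and chunk size are hypothetical:

import base64
import hashlib


def calculate_b64_md5(file_name, chunk_size=8 * 1024 * 1024):
    """Returns the base64-encoded binary MD5 digest of file_name."""
    # Read the file in fixed-size chunks so arbitrarily large files fit in memory.
    md5 = hashlib.md5()
    with open(file_name, 'rb') as fp:
        for chunk in iter(lambda: fp.read(chunk_size), b''):
            md5.update(chunk)
    # Cloud Storage reports object MD5s (md5Hash) base64-encoded, so returning
    # the digest in the same form makes validation a simple string comparison.
    return base64.b64encode(md5.digest()).decode('ascii')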