
Lines Matching refs:devices

54                        devices=None):
66 If `devices` are `None`, then all available GPUs are going to be used for
138 devices: Optional list of devices to replicate the model across. This
155 devices,
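
The matched lines above belong to the `devices` argument of `replicate_model_fn` in the TF 1.x contrib Estimator API. A minimal usage sketch follows; the model body and the feature key 'x' are illustrative only and are not part of the file being indexed.

    import tensorflow as tf

    def model_fn(features, labels, mode):
        # Trivial model, purely to show the wrapping; 'x' is a made-up feature key.
        predictions = tf.layers.dense(features['x'], 1)
        loss = tf.losses.mean_squared_error(labels, predictions)
        optimizer = tf.train.GradientDescentOptimizer(0.01)
        # TowerOptimizer lets the replicated model_fn merge per-tower gradients.
        optimizer = tf.contrib.estimator.TowerOptimizer(optimizer)
        train_op = optimizer.minimize(loss, global_step=tf.train.get_global_step())
        return tf.estimator.EstimatorSpec(mode, loss=loss, train_op=train_op)

    # devices=None: replicate across all visible GPUs, falling back to the CPU.
    replicated_fn = tf.contrib.estimator.replicate_model_fn(model_fn, devices=None)
    estimator = tf.estimator.Estimator(model_fn=replicated_fn)
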
170 """Variables are placed on a single device and shared across all devices.
180 """Variables are placed on all devices in a round-robin fashion.
183 copy of each variable that is shared across all devices.
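
The docstring fragments above describe the two variable-placement modes: keep all variables on a single device and share them across towers, or spread one shared copy of each variable over the devices round-robin. The helper below is an illustrative sketch of the round-robin idea only; the function name and logic are assumptions, not the library's internal implementation.

    import tensorflow as tf

    def round_robin_variable_setter(devices):
        """Device function that spreads variable ops over `devices` round-robin."""
        counter = {'next': 0}
        var_ops = ('Variable', 'VariableV2', 'VarHandleOp')

        def _setter(op):
            if op.type in var_ops:
                device = devices[counter['next'] % len(devices)]
                counter['next'] += 1
                return device
            return op.device  # leave non-variable ops on their requested device

        return _setter

    # Variables created under this scope alternate between the two GPUs, while
    # each tower still keeps its compute on its own device.
    with tf.device(round_robin_variable_setter(['/GPU:0', '/GPU:1'])):
        w = tf.get_variable('w', shape=[10, 10])
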
190 devices=None,
196 if not devices:
197 devices = _get_local_devices('GPU') or _get_local_devices('CPU')
199 is_a_single_gpu_case = len(devices) == 1 and 'GPU' in devices[0].upper()
200 consolidation_device = devices[0] if is_a_single_gpu_case else '/CPU:0'
204 ps_devices = devices
208 .format(devices, ps_devices, consolidation_device))
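
The fallback and consolidation logic shown above (use the local GPUs when no `devices` are passed, otherwise the local CPUs; consolidate on the single GPU in the one-GPU case, on `/CPU:0` otherwise) can be reproduced with the device_lib API the snippets rely on. A sketch, assuming TF 1.x:

    from tensorflow.python.client import device_lib

    def _get_local_devices(device_type):
        return [d.name for d in device_lib.list_local_devices()
                if d.device_type == device_type]

    devices = _get_local_devices('GPU') or _get_local_devices('CPU')

    # With exactly one GPU there is no cross-device variable traffic, so
    # consolidation can stay on that GPU; otherwise gradients and variables
    # are consolidated on the CPU.
    is_a_single_gpu_case = len(devices) == 1 and 'GPU' in devices[0].upper()
    consolidation_device = devices[0] if is_a_single_gpu_case else '/CPU:0'
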
220 devices=devices,
226 features, labels, len(devices), device=consolidation_device)
235 devices=devices,
247 if len(devices) == 1:
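
The matches above hand the input batch to `_split_batch`, which shards features and labels into one slice per device on the consolidation device. A simplified sketch of that idea, assuming dense tensor inputs (the real helper also handles dict-valued features and sparse tensors):

    import tensorflow as tf

    def split_batch(features, labels, number_of_shards, device):
        # The batch size must be divisible by the number of shards for tf.split.
        with tf.device(device):
            feature_shards = tf.split(features, number_of_shards)
            label_shards = tf.split(labels, number_of_shards)
        return feature_shards, label_shards
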
510 devices,
514 """Replicate the loss computation across devices."""
528 loss_reduction, len(devices))
530 for i, device in enumerate(devices):
561 if (tower_spec.train_op is not None and len(devices) > 1 and
565 ' multiple `devices`.')
570 tower_spec, loss_reduction, number_of_towers=len(devices))
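
The final group of matches replicates the loss computation: one tower per device, variables shared across towers, and the aggregated loss scaled by the number of towers so it stays comparable to a single-device run. A hedged sketch of that pattern, where compute_loss stands in for the user-provided loss computation:

    import tensorflow as tf

    def replicated_loss(compute_loss, feature_shards, label_shards, devices):
        tower_losses = []
        for i, device in enumerate(devices):
            with tf.device(device), tf.name_scope('tower_%d' % i):
                # Reuse the same variables on every tower after the first.
                with tf.variable_scope(tf.get_variable_scope(), reuse=(i > 0)):
                    tower_losses.append(
                        compute_loss(feature_shards[i], label_shards[i]))
        # Dividing by the number of towers keeps the summed loss on the same
        # scale as a single-device run.
        return tf.add_n(tower_losses) / len(devices)
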