    Searched full:weights (Results 726 - 750 of 1517)


  /external/tensorflow/tensorflow/contrib/kfac/examples/
convnet.py 85 # layer.weights is a list. This converts it to a (hashable) tuple.
124 params: Tuple of (weights, bias), parameters for this layer.
147 # preactivations, activations, weights, and bias.
226 """Returns True if this task should update the weights."""
247 """Number of tasks that will update weights."""
258 60% of tasks update weights; the next 20% accumulate covariance statistics;
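A minimal sketch of the task split quoted from convnet.py above, assigning roles by task index. The role of the remaining 20% of tasks is an assumption here, since the excerpt is cut off:

```python
# Hypothetical rendering of the split: the first 60% of tasks update
# weights, the next 20% accumulate covariance statistics, and the
# remainder (assumed, not shown in the excerpt) handle the leftover ops.
def task_role(task_id, num_tasks):
    if task_id < 0.6 * num_tasks:
        return "update_weights"
    if task_id < 0.8 * num_tasks:
        return "accumulate_covariance"
    return "other"

print([task_role(i, 10) for i in range(10)])
```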
  /external/tensorflow/tensorflow/contrib/linear_optimizer/python/
sdca_estimator.py 98 None if there are no weights.
100 model weights.
178 """SessionRunHook to update and shrink SDCA model weights."""
226 weights. It is used to down-weight or boost examples during training. It
349 weights. It is used to down-weight or boost examples during training. It
477 weights. It is used to down-weight or boost examples during training. It
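The example-weights column those docstrings describe scales each example's contribution to the loss. An illustrative (not SDCA-specific) weighted-loss computation, with arbitrary numbers:

```python
import numpy as np

# Per-example weights down-weight or boost examples during training.
losses = np.array([0.5, 1.2, 0.3])
example_weights = np.array([1.0, 2.0, 0.5])  # boost the 2nd, damp the 3rd
weighted_loss = np.sum(example_weights * losses) / np.sum(example_weights)
print(weighted_loss)  # ~0.871
```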
  /external/tensorflow/tensorflow/docs_src/mobile/
prepare_models.md 33 trained and weights are updated, so it's a time-critical operation, and it may
51 you want to have a complete graph you can run, including the weights, you'll
60 the `NodeDef`, so if all the `Variable` weights are converted to `Const` nodes,
62 the weights. Freezing the graph handles the process of loading the
88 CPU version of the same graph, but keep the same weights for both. You might
156 computation that's needed for back-propagation and updates of weights, as well
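A minimal sketch of the freezing step prepare_models.md describes, using the TF 1.x `convert_variables_to_constants` utility. The checkpoint paths and the output node name are placeholders:

```python
import tensorflow as tf

# Freeze: bake Variable weights into Const nodes so one GraphDef file
# carries both the graph structure and the trained weights.
with tf.Session() as sess:
    saver = tf.train.import_meta_graph("model.ckpt.meta")    # placeholder path
    saver.restore(sess, "model.ckpt")
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=["output"])  # assumed node name
with tf.gfile.GFile("frozen_graph.pb", "wb") as f:
    f.write(frozen.SerializeToString())
```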
  /external/tensorflow/tensorflow/docs_src/tutorials/
image_retraining.md 6 retrains from the existing weights for new classes. In this example we'll be
121 then compared against the actual labels to update the final layer's weights
134 The script includes TensorBoard summaries that make it easier to understand, debug, and optimize the retraining. For example, you can visualize the graph and statistics, such as how the weights or accuracy varied during training.
296 with the results used to update the model's weights. You might wonder why we
366 its weights quantized down to eight bits on disk. You can choose '1.0', '0.75',
371 32-bit float weights.
wide.md 202 (also known as model weights) for each feature ID. The model parameters will be
391 regularization tends to make model weights stay at zero, creating sparser
392 models, whereas L2 regularization also tries to make the model weights closer to
395 weights will be zero. This is often desirable when the feature space is very
415 where \\(\mathbf{w}=[w_1, w_2, ..., w_d]\\) are the model weights for the
439 weights (i.e. model parameters) to minimize a **loss function** defined over the
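A short sketch of how the L1/L2 regularization strengths discussed in wide.md attach to the linear model. The single numeric column and the strength values are placeholders:

```python
import tensorflow as tf

feature_columns = [tf.feature_column.numeric_column("x")]  # placeholder
model = tf.estimator.LinearClassifier(
    feature_columns=feature_columns,
    optimizer=tf.train.FtrlOptimizer(
        learning_rate=0.1,
        l1_regularization_strength=1.0,    # pushes many weights to exactly 0
        l2_regularization_strength=1.0))   # shrinks all weights toward 0
```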
  /external/tensorflow/tensorflow/python/keras/_impl/keras/layers/
convolutional_recurrent.py 231 kernel_initializer: Initializer for the `kernel` weights matrix,
234 weights matrix,
243 the `kernel` weights matrix.
245 the `recurrent_kernel` weights matrix.
250 the `kernel` weights matrix.
252 the `recurrent_kernel` weights matrix.
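For orientation, a sketch of where those documented arguments attach on the convolutional-recurrent layer; the values are arbitrary examples:

```python
import tensorflow as tf

layer = tf.keras.layers.ConvLSTM2D(
    filters=16, kernel_size=(3, 3),
    kernel_initializer="glorot_uniform",   # the `kernel` weights matrix
    recurrent_initializer="orthogonal",    # the `recurrent_kernel` weights matrix
    kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    recurrent_regularizer=tf.keras.regularizers.l2(1e-4),
    kernel_constraint=tf.keras.constraints.max_norm(3.0),
    recurrent_constraint=tf.keras.constraints.max_norm(3.0))
```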
  /external/webrtc/webrtc/common_audio/vad/
vad_core.c 36 // Weights for the two Gaussians for the six channels (noise)
39 // Weights for the two Gaussians for the six channels (speech)
98 // - weights[i] : Weights used for averaging.
102 const int16_t* weights) {
108 weighted_average += data[k * kNumChannels] * weights[k * kNumChannels];
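A Python rendering of the averaging loop excerpted above; the offset handling and the per-channel Gaussian count are assumptions based on the surrounding comments:

```python
kNumChannels = 6   # values are interleaved per channel, stride kNumChannels
kNumGaussians = 2  # assumption: two Gaussians per channel, as in the comments

def weighted_average(data, offset, weights):
    total = 0
    for k in range(kNumGaussians):
        data[k * kNumChannels] += offset  # assumed from the surrounding code
        total += data[k * kNumChannels] * weights[k * kNumChannels]
    return total
```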
  /external/tensorflow/tensorflow/python/layers/
convolutional.py 75 norm constraints or value constraints for layer weights). The function
266 norm constraints or value constraints for layer weights). The function
375 norm constraints or value constraints for layer weights). The function
384 reuse: Boolean, whether to reuse the weights of a previous layer
464 norm constraints or value constraints for layer weights). The function
580 norm constraints or value constraints for layer weights). The function
589 reuse: Boolean, whether to reuse the weights of a previous layer
670 norm constraints or value constraints for layer weights). The function
    [all...]
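A sketch of the `reuse` flag documented above (TF 1.x layers API): the second call shares the first call's weights rather than creating new ones:

```python
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 8, 8, 3])
y1 = tf.layers.conv2d(x, filters=4, kernel_size=3, name="shared")
y2 = tf.layers.conv2d(x, filters=4, kernel_size=3, name="shared", reuse=True)
# y1 and y2 are computed with the same kernel and bias variables.
```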
network.py 205 weights = network.trainable_weights
271 # A GraphNetwork does not create weights of its own, thus it is already
274 # A GraphNetwork does not create weights of its own, thus has no dtype.
713 weights = []
715 weights += layer.trainable_weights
716 return weights
720 weights = []
722 weights += layer.non_trainable_weights
727 return trainable_weights + weights
728 return weights
    [all...]
core_test.py 261 weights = _get_variable_dict_from_varstore()
262 self.assertEqual(len(weights), 2)
263 # Check that the matrix weights got initialized to ones (from scope).
264 self.assertAllClose(weights['scope/dense/kernel'].read_value().eval(),
267 self.assertAllClose(weights['scope/dense/bias'].read_value().eval(),
network_test.py 267 self.assertEqual(network.weights, dense.weights)
283 self.assertEqual(network.weights, dense.weights)
512 self.assertEqual(len(recursive_net.weights), 2)
  /external/deqp/modules/gles3/functional/
es3fDrawTests.cpp 891 float weights[SIZE]; member in struct:deqp::gles3::Functional::__anon18434::UniformWeightArray
896 weights[i] = 1.0f;
    [all...]
  /external/tensorflow/tensorflow/python/feature_column/
feature_column_test.py     [all...]
  /external/tensorflow/tensorflow/python/estimator/canned/
dnn_testing_utils.py 93 """Create checkpoint file with provided model weights.
100 weights, biases = zip(*weights_and_biases)
103 # Hidden layer weights.
104 for i in range(0, len(weights) - 1):
105 model_weights[HIDDEN_WEIGHTS_NAME_PATTERN % i] = weights[i]
108 # Output layer weights.
109 model_weights[LOGITS_WEIGHTS_NAME] = weights[-1]
496 # explicitly set variable weights through a checkpoint.
677 # explicitly set variable weights through a checkpoint.
    [all...]
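A hedged sketch of the checkpoint-writing helper those lines come from; the per-layer variable names below are stand-ins for the module's actual name-pattern constants:

```python
import tensorflow as tf

def create_checkpoint(weights_and_biases, path):
    # Pairs of (weights, biases) are unzipped; the last entry is treated
    # as the output (logits) layer, the rest as hidden layers.
    weights, biases = zip(*weights_and_biases)
    with tf.Graph().as_default():
        for i in range(len(weights) - 1):
            tf.Variable(weights[i], name="hiddenlayer_%d/kernel" % i)  # assumed name
            tf.Variable(biases[i], name="hiddenlayer_%d/bias" % i)     # assumed name
        tf.Variable(weights[-1], name="logits/kernel")                 # assumed name
        tf.Variable(biases[-1], name="logits/bias")                    # assumed name
        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            tf.train.Saver().save(sess, path)
```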
  /external/tensorflow/tensorflow/python/framework/
function_test.py     [all...]
  /external/dng_sdk/source/
dng_mosaic_info.cpp 89 // weights.
201 // Calculate 16-bit weights.
209 // Round weights to 8 fractional bits.
213 // Keep track of total of weights.
228 // Adjust largest entry so total of weights is exactly 256.
232 // Recompute the floating point weights from the rounded integer weights
    [all...]
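The rounding scheme those comments describe, sketched in Python: quantize to 8 fractional bits, then nudge the largest entry so the integer weights sum to exactly 256:

```python
def quantize_weights(float_weights):
    # Round to 8 fractional bits (8.8 fixed point: 256 == 1.0).
    rounded = [int(round(w * 256.0)) for w in float_weights]
    # Adjust the largest entry so the total is exactly 256.
    largest = rounded.index(max(rounded))
    rounded[largest] += 256 - sum(rounded)
    return rounded

print(quantize_weights([1 / 3, 1 / 3, 1 / 3]))  # [86, 85, 85], sums to 256
```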
  /external/llvm/lib/Transforms/Instrumentation/
PGOInstrumentation.cpp 31 // annotates the branch weights. It also reads the indirect call value
470 // Set the branch weights based on the count values.
712 DEBUG(dbgs() << "\nSetting branch weights.\n");
742 SmallVector<unsigned, 4> Weights;
744 Weights.push_back(scaleBranchCount(ECI, Scale));
747 MDB.createBranchWeights(Weights));
749 for (const auto &W : Weights) { dbgs() << W << " "; }
    [all...]
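The scaling step those lines hint at keeps raw profile counts inside 32-bit branch-weight metadata. A hedged sketch of one plausible rule; the actual `scaleBranchCount` logic is not shown in the excerpt:

```python
UINT32_MAX = 2**32 - 1

def scale_branch_counts(counts):
    # Choose a common scale so every count fits in a uint32, then divide.
    scale = max(1, -(-max(counts) // UINT32_MAX))  # ceiling division
    return [c // scale for c in counts]

print(scale_branch_counts([10**12, 5 * 10**11]))  # both now fit in 32 bits
```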
  /external/tensorflow/tensorflow/contrib/slim/python/slim/
learning.py 51 (a) computes the loss, (b) applies the gradients to update the weights and
65 'conv0/weights': 1.2,
66 'fc8/weights': 3.4,
184 Rather than initializing all of the weights of a given model, we sometimes
185 only want to restore some of the weights from a checkpoint. To do this, one
216 One may want to initialize the weights of a model from values from an arbitrary
218 using plain TensorFlow, it also results in the values of your weights being
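A minimal sketch of the partial restore learning.py describes: only the variables in the Saver's var_list are read from the checkpoint. The variable name and checkpoint path are illustrative:

```python
import tensorflow as tf

weights = tf.get_variable("conv0/weights", shape=[3, 3, 3, 16])  # illustrative
restorer = tf.train.Saver(var_list={"conv0/weights": weights})
with tf.Session() as sess:
    restorer.restore(sess, "/path/to/model.ckpt")  # placeholder path
```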
  /external/tensorflow/tensorflow/contrib/framework/python/ops/
variables_test.py     [all...]
  /external/tensorflow/tensorflow/contrib/cudnn_rnn/python/layers/
cudnn_rnn.py 62 multi-layer multi-directional RNN, whereas tf RNN weights are per-cell and
148 # Number of cell weights (or biases) per layer.
341 weights = [
349 opaque_params_t = self._canonical_to_opaque(weights, biases)
460 weights=cu_weights,
  /external/tensorflow/tensorflow/contrib/distributions/python/ops/bijectors/
masked_autoregressive.py 60 this property by zeroing out weights in its `masked_dense` layers.
65 masked weights such that the autoregressive property is automatically met in
86 `log(scale)` using `masked_dense` layers in a deep neural network. Weights are
347 If `None` (default), weights are initialized using the
349 reuse: Python `bool` scalar representing whether to reuse the weights of a
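A toy version of the weight masking that enforces the autoregressive property described above. The real `masked_dense` derives its masks from MADE-style degree assignments, which this strict-ordering mask only approximates:

```python
import numpy as np

def autoregressive_mask(n_in, n_out):
    # mask[i, j] == 1 iff output j may depend on input i (here: i < j).
    return (np.arange(n_in)[:, None] < np.arange(n_out)[None, :]).astype(np.float32)

kernel = np.random.randn(4, 4).astype(np.float32)
masked_kernel = kernel * autoregressive_mask(4, 4)  # zeroed-out weights
```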
  /external/tensorflow/tensorflow/contrib/factorization/python/ops/
gmm_ops.py 113 process. Can contain any combination of "w" for weights, "m" for
135 # Membership weights w_{ik} where "i" is the i-th example and "k"
162 # Mixture weights, representing the probability that a randomly
359 # Membership weights are computed as:
479 process. Can contain any combination of "w" for weights, "m" for
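The membership weights w_{ik} named in those comments are the standard E-step responsibilities: the posterior probability that example i was drawn from component k. A small numpy sketch:

```python
import numpy as np

def membership_weights(densities, mixture_weights):
    # densities: [num_examples, num_components] likelihoods p(x_i | k);
    # mixture_weights: [num_components] prior probabilities pi_k.
    unnormalized = densities * mixture_weights[None, :]
    return unnormalized / unnormalized.sum(axis=1, keepdims=True)

dens = np.array([[0.2, 0.5], [0.9, 0.1]])
print(membership_weights(dens, np.array([0.3, 0.7])))  # rows sum to 1
```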
wals.py 242 * sum_weights: A float Tensor, the sum of factor weights.
431 (vector). The weights to use in the projection.
495 shards and the elements in each inner list are the weights for the
498 - A non-negative scalar: This value is used for all row weights.
503 weights will be cached on the workers before the updates start, during
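A sketch of the row-weight specification those lines describe, using a hypothetical expansion helper: a scalar applies one weight to every row, while a list of inner lists gives per-shard weights:

```python
def expand_row_weights(spec, num_rows):
    # Hypothetical helper illustrating the accepted formats.
    if isinstance(spec, (int, float)):  # non-negative scalar: used for all rows
        return [float(spec)] * num_rows
    return [float(w) for shard in spec for w in shard]  # per-shard weight lists

print(expand_row_weights(0.5, 4))                  # [0.5, 0.5, 0.5, 0.5]
print(expand_row_weights([[1.0, 2.0], [3.0]], 3))  # [1.0, 2.0, 3.0]
```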
  /external/tensorflow/tensorflow/contrib/tensor_forest/kernels/
tree_utils.cc 232 // Populate *weights with the smoothed per-class frequencies needed to
237 std::vector<float>* weights) {
248 weights->resize(num_classes * 2);
253 (*weights)[i] = (left_count + 1.0) / denom;
255 (*weights)[num_classes + i] = (right_count + 1.0) / denom;
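A Python rendering of the smoothing in tree_utils.cc: Laplace-smoothed left/right class frequencies packed into one vector. The excerpt does not show how `denom` is computed, so the smoothed per-side total used here is an assumption:

```python
def smoothed_class_weights(left_counts, right_counts):
    num_classes = len(left_counts)
    weights = [0.0] * (num_classes * 2)
    left_denom = sum(left_counts) + num_classes    # assumed smoothing total
    right_denom = sum(right_counts) + num_classes  # assumed smoothing total
    for i in range(num_classes):
        weights[i] = (left_counts[i] + 1.0) / left_denom
        weights[num_classes + i] = (right_counts[i] + 1.0) / right_denom
    return weights

print(smoothed_class_weights([3, 1], [0, 4]))
```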
  /external/tensorflow/tensorflow/python/estimator/
warm_starting_util.py 114 Warm-start all weights in the model (input layer and hidden weights).
130 Warm-start all weights but the embedding parameters corresponding to
166 Warm-start all weights but the parameters corresponding to `sc_vocab_file`
185 Warm-start all weights but the parameters corresponding to `sc_vocab_file`
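A minimal sketch of the warm-start configurations those docstrings walk through; the checkpoint path, the feature column, and the variable regex are placeholders:

```python
import tensorflow as tf

cols = [tf.feature_column.numeric_column("x")]   # placeholder column
ws = tf.estimator.WarmStartSettings(
    ckpt_to_initialize_from="/tmp/prev_model",   # placeholder path
    vars_to_warm_start=".*input_layer.*")        # regex narrowing the set
model = tf.estimator.DNNClassifier(
    hidden_units=[10], feature_columns=cols, warm_start_from=ws)
```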
