/external/tensorflow/tensorflow/contrib/boosted_trees/lib/learner/batch/ |
base_split_handler.py |
  48   min_node_weight: Minimum sum of weights of examples in each partition to
  72   hessians, empty_gradients, empty_hessians, weights,
  86   weights: A dense float32 tensor with a weight for each example.
  97   hessians, empty_gradients, empty_hessians, weights,
  111  weights: A dense float32 tensor with a weight for each example.
  123  empty_gradients, empty_hessians, weights, is_active,
|
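Note on the entry above: the split handler accumulates weighted gradient/hessian statistics, and min_node_weight is the minimum sum of example weights a partition must reach. A minimal NumPy sketch of that validity check (function and variable names here are hypothetical, not from the handler):

    import numpy as np

    # Sketch of the min_node_weight rule described in the docstring above:
    # a candidate split is kept only if each side accumulates enough
    # total example weight.
    def split_is_valid(weights, left_mask, min_node_weight):
        """weights: per-example weights; left_mask: bool array marking the left partition."""
        left_weight = np.sum(weights[left_mask])
        right_weight = np.sum(weights[~left_mask])
        return left_weight >= min_node_weight and right_weight >= min_node_weight

    weights = np.array([0.5, 1.0, 1.0, 2.0], dtype=np.float32)
    print(split_is_valid(weights, np.array([True, True, False, False]), 1.0))  # True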
/external/tensorflow/tensorflow/python/kernel_tests/ |
bincount_op_test.py |
  72  weights = np.random.randint(-100, 100, num_samples)
  74  weights = np.random.random(num_samples)
  76  math_ops.bincount(arr, weights).eval(), np.bincount(arr, weights))
  84  weights = np.ones(num_samples).astype(dtype)
  86  math_ops.bincount(arr, None).eval(), np.bincount(arr, weights))
|
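Note on the entry above: the test checks math_ops.bincount against np.bincount, which with weights sums the weight of every occurrence of each value, and with weights=None counts occurrences (equivalent to all-ones weights, as line 84/86 exploit). A NumPy-only sketch:

    import numpy as np

    # np.bincount with weights sums the weight of each occurrence of a value.
    arr = np.array([0, 1, 1, 3])
    weights = np.array([0.5, 1.0, 2.0, 0.25])
    print(np.bincount(arr, weights))  # [0.5  3.   0.   0.25]
    # With no weights, each occurrence counts as 1, same as np.ones weights:
    print(np.bincount(arr))           # [1 2 0 1]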
/external/tensorflow/tensorflow/tools/docker/notebooks/ |
1_hello_tensorflow.ipynb |
  514  "Weights:\n",
  531  " weights = tf.constant(np.random.randn(4, 2).astype(np.float32))\n",
  532  " output = tf.matmul(input_features, weights)\n",
  535  " print(\"Weights:\")\n",
  536  " print(weights.eval())\n",
  550  "You might try modifying this example. Running the cell multiple times will generate new random weights and a new output. Or, change the input, e.g., to \[0 0 0 1\], and run the cell again. Or, try initializing the weights using a TensorFlow op, e.g., `random_normal`, instead of using numpy to generate the random weights.\n",
  552  "What we have here is already the basics of a simple neural network: if we read in input features along with some expected output, and change the weights based on the output error each time, that's a neural network."
  564  "Let's look at adding two small matrices in a loop, not by creating new tensors every time, but by updating the existing values and then re-running the computation graph on the new data. This happens a lot with machine learning models, where we change some parameters each time, as gradient descent does with weights, and then perform the same computations over and over again. [all...] |
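Note on the entry above: the notebook cell multiplies a [1, 4] input by random [4, 2] weights. The same computation in plain NumPy (no TensorFlow session needed):

    import numpy as np

    # NumPy equivalent of the notebook cell: a [1, 4] input times [4, 2] weights.
    input_features = np.array([[1.0, 2.0, 3.0, 4.0]], dtype=np.float32)
    weights = np.random.randn(4, 2).astype(np.float32)
    output = input_features @ weights  # shape [1, 2]
    print("Weights:\n", weights)
    print("Output:\n", output)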
/external/icu/icu4c/source/i18n/ |
collationbuilder.h |
  178  * secondary weights 06..45 which are otherwise reserved for compressed sort keys.
  312  * Indexes of nodes with root primary weights, sorted by primary.
  317  * It also allows storing root primary weights in list head nodes,
  318  * without previous index, leaving room in root primary nodes for 32-bit primary weights.
  322  * Data structure for assigning tailored weights and CEs.
  329  * Root primary nodes have 32-bit weights but do not have previous indexes.
  330  * All other nodes have at most 16-bit weights and do have previous indexes.
  332  * Nodes with explicit weights store root collator weights,
  333  * or default weak weights (e.g., secondary 05) for stronger nodes [all...] |
/external/tensorflow/tensorflow/python/debug/cli/ |
evaluator_test.py |
  40   evaluator._parse_debug_tensor_name("hidden_0/Weights:0"))
  42   self.assertEqual("hidden_0/Weights", node_name)
  58   "hidden_0/Weights:0:DebugNumericSummary"))
  60   self.assertEqual("hidden_0/Weights", node_name)
  77   "/job:worker/replica:0/task:3/gpu:0:hidden_0/Weights:0"))
  79   self.assertEqual("hidden_0/Weights", node_name)
  97   "hidden_0/Weights:0:DebugNumericSummary"))
  99   self.assertEqual("hidden_0/Weights", node_name)
  137  evaluator._parse_debug_tensor_name("hidden_0/Weights:0[3]"))
  139  self.assertEqual("hidden_0/Weights", node_name [all...] |
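Note on the entry above: the test expects tensor names like "hidden_0/Weights:0", possibly followed by a slicing expression, to split into a node name and an output slot. A rough sketch of that split (my own parsing, not the module's actual logic, which also strips device prefixes and debug-op suffixes as the other cases show):

    import re

    # Rough sketch: split "hidden_0/Weights:0[3]" into node name and slot.
    def parse_tensor_name(name):
        match = re.match(r"(.+):(\d+)(\[.*\])?$", name)
        node_name, slot = match.group(1), int(match.group(2))
        return node_name, slot

    print(parse_tensor_name("hidden_0/Weights:0"))     # ('hidden_0/Weights', 0)
    print(parse_tensor_name("hidden_0/Weights:0[3]"))  # ('hidden_0/Weights', 0)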
/external/tensorflow/tensorflow/python/ops/ |
nn_test.py |
  480  weights: Embedding weights to use as test input. It is a numpy array
  493  weights = np.random.randn(num_classes, dim).astype(np.float32)
  501  sampled_w, sampled_b = weights[sampled], biases[sampled]
  502  true_w, true_b = weights[labels], biases[labels]
  520  return weights, biases, hidden_acts, sampled_vals, exp_logits, exp_labels
  522  def _ShardTestEmbeddings(self, weights, biases, num_shards):
  523  """Shards the weights and biases returned by _GenerateTestData.
  526  weights: The weights returned by _GenerateTestData [all...] |
/external/icu/android_icu4j/src/main/java/android/icu/impl/coll/ |
CollationWeights.java |
  22   * Allocates n collation element weights between two exclusive limits.
  47   // We use only the lower 16 bits for secondary weights.
  60   // We use only the lower 16 bits for tertiary weights.
  67   // The other bits are used for case & quaternary weights.
  76   * what ranges to use for a given number of weights between (excluding)
  80   * weights greater than this one.
  82   * weights less than this one.
  83   * @param n The number of collation element weights w necessary such that
  89   // which ranges to use for a given number of weights between (excluding)
  106  // printf("error: the maximum number of %ld weights is insufficient for n=%ld\n" [all...] |
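Note on the entry above (the icu4j copy below is the same class): CollationWeights allocates n weights strictly between two exclusive limits. As a greatly simplified sketch of only the core idea, one can spread n integers evenly in the open interval; the real class works on weight bytes and handles reserved ranges and multi-byte weights, none of which this models:

    # Greatly simplified sketch of "allocate n weights between two
    # exclusive limits".  The real CollationWeights operates on weight
    # bytes with reserved ranges; this only shows the even-spacing idea.
    def allocate_weights(lower, upper, n):
        step = (upper - lower) / (n + 1)
        weights = [int(lower + step * (i + 1)) for i in range(n)]
        assert all(lower < w < upper for w in weights)
        return weights

    print(allocate_weights(0x0500, 0x4500, 4))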
/external/icu/icu4j/main/classes/collate/src/com/ibm/icu/impl/coll/ |
CollationWeights.java |
  21   * Allocates n collation element weights between two exclusive limits.
  45   // We use only the lower 16 bits for secondary weights.
  58   // We use only the lower 16 bits for tertiary weights.
  65   // The other bits are used for case & quaternary weights.
  74   * what ranges to use for a given number of weights between (excluding)
  78   * weights greater than this one.
  80   * weights less than this one.
  81   * @param n The number of collation element weights w necessary such that
  87   // which ranges to use for a given number of weights between (excluding)
  104  // printf("error: the maximum number of %ld weights is insufficient for n=%ld\n" [all...] |
/external/tensorflow/tensorflow/contrib/kfac/python/kernel_tests/ |
optimizer_test.py |
  55   weights = variable_scope.get_variable(
  59   output = math_ops.matmul(inputs, weights) + bias
  61   layer_collection.register_fully_connected((weights, bias), inputs, output)
  167  weights = variable_scope.get_variable(
  171  output = math_ops.matmul(inputs, weights) + bias
  173  layer_collection.register_fully_connected((weights, bias), inputs, output)
  189  grads_and_vars = opt.compute_gradients(output, [weights, bias])
|
/prebuilts/clang/host/linux-x86/clang-3859424/prebuilt_include/llvm/lib/Fuzzer/ |
FuzzerCorpus.h |
  184  // Must be called whenever the corpus or unit weights are changed.
  188  Weights.resize(N);
  192  Weights[i] = Inputs[i]->NumFeatures * (i + 1);
  194  std::iota(Weights.begin(), Weights.end(), 1);
  196  Intervals.begin(), Intervals.end(), Weights.begin());
  201  std::vector<double> Weights;
|
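Note on the entry above (the same code recurs in the newer prebuilt clang copies below): libFuzzer weights each corpus input by NumFeatures * (i + 1), so later, feature-rich inputs are picked more often, with std::iota providing a 1..N fallback, and then samples proportionally to weight. A Python sketch of the same weighted choice:

    import random

    # Sketch of the corpus selection above: weight input i by
    # NumFeatures * (i + 1), fall back to weights 1..N when no input
    # has features, then sample an index proportionally to its weight.
    num_features = [4, 0, 7, 2]
    if any(num_features):
        weights = [nf * (i + 1) for i, nf in enumerate(num_features)]
    else:
        weights = list(range(1, len(num_features) + 1))  # the std::iota fallback
    index = random.choices(range(len(num_features)), weights=weights)[0]
    print(weights, "->", index)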
/prebuilts/clang/host/linux-x86/clang-4053586/prebuilt_include/llvm/lib/Fuzzer/ |
FuzzerCorpus.h |
  191  // Must be called whenever the corpus or unit weights are changed.
  195  Weights.resize(N);
  199  Weights[i] = Inputs[i]->NumFeatures * (i + 1);
  201  std::iota(Weights.begin(), Weights.end(), 1);
  203  Intervals.begin(), Intervals.end(), Weights.begin());
  208  std::vector<double> Weights;
|
/prebuilts/clang/host/linux-x86/clang-4393122/prebuilt_include/llvm/lib/Fuzzer/ |
FuzzerCorpus.h |
  191  // Must be called whenever the corpus or unit weights are changed.
  195  Weights.resize(N);
  199  Weights[i] = Inputs[i]->NumFeatures * (i + 1);
  201  std::iota(Weights.begin(), Weights.end(), 1);
  203  Intervals.begin(), Intervals.end(), Weights.begin());
  208  std::vector<double> Weights;
|
/prebuilts/clang/host/linux-x86/clang-4479392/prebuilt_include/llvm/lib/Fuzzer/ |
FuzzerCorpus.h |
  191  // Must be called whenever the corpus or unit weights are changed.
  195  Weights.resize(N);
  199  Weights[i] = Inputs[i]->NumFeatures * (i + 1);
  201  std::iota(Weights.begin(), Weights.end(), 1);
  203  Intervals.begin(), Intervals.end(), Weights.begin());
  208  std::vector<double> Weights;
|
/external/llvm/lib/Analysis/ |
BranchProbabilityInfo.cpp |
  38   // Weights are for internal use only. They are used by heuristics to help to
  41   // Using "Loop Branch Heuristics" we predict weights of edges for the
  111  /// \brief Calculate edge weights for successors lead to unreachable.
  153  // Return false here so that edge weights for InvokeInst could be decided
  196  // Ensure there are weights for all of the successors. Note that the first
  201  // Build up the final weights that will be used in a temporary buffer.
  202  // Compute the sum of all weights to later decide whether they need to
  205  SmallVector<uint32_t, 2> Weights;
  206  Weights.reserve(TI->getNumSuccessors());
  214  Weights.push_back(Weight->getZExtValue()) [all...] |
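Note on the entry above: the pass collects one weight per successor from !prof metadata and later normalizes them into edge probabilities. A toy sketch of that normalization only (the real code also guards against overflow and rescales weights into a fixed range):

    # Toy version of turning branch weights into edge probabilities:
    # probability(successor i) = weight[i] / sum(weights).
    weights = [20, 12, 4]
    total = sum(weights)
    probs = [w / total for w in weights]
    print(probs)  # [0.555..., 0.333..., 0.111...]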
/external/tensorflow/tensorflow/contrib/learn/python/learn/estimators/ |
head_test.py |
  197  w = ("regression_head/logits/weights:0",
  251  weights = 2.
  254  features={"label_weight": weights},
  262  _assert_metrics(self, (weights * 5.) / len(labels), {
  263  "loss": (weights * 5.) / (weights * len(labels))
  269  weights = (2., 5., 0.)
  272  features={"label_weight": weights},
  280  _assert_metrics(self, 2. / len(labels), {"loss": 2. / np.sum(weights)},
  286  weights = ((2.,), (5.,), (0.,) [all...] |
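Note on the entry above: the expected metrics follow the weighted-mean convention, loss = sum(w_i * l_i) / sum(w_i), which is why a scalar weight cancels out (line 263) and why a per-example weight vector divides by np.sum(weights) (line 280). A small sketch of that formula:

    import numpy as np

    # Weighted-mean loss as the test's expected values encode it:
    # loss = sum(w_i * l_i) / sum(w_i); a scalar weight reduces to the
    # plain mean.
    def weighted_mean_loss(losses, weights):
        losses = np.asarray(losses, dtype=np.float64)
        weights = np.broadcast_to(np.asarray(weights, dtype=np.float64), losses.shape)
        return np.sum(weights * losses) / np.sum(weights)

    print(weighted_mean_loss([5.0, 5.0, 5.0], 2.0))           # 5.0 (scalar weight cancels)
    print(weighted_mean_loss([1.0, 0.0, 9.0], [2., 5., 0.]))  # 2/7, per-example weights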
/external/tensorflow/tensorflow/contrib/gan/python/losses/python/ |
losses_impl.py |
  77   weights=1.0,
  89   weights: Optional `Tensor` whose rank is either 0, or the same rank as
  102  discriminator_gen_outputs, weights)) as scope:
  107  loss, weights, scope, loss_collection, reduction)
  227  weights=generated_weights, scope=scope, loss_collection=None,
  231  weights=real_weights, label_smoothing=label_smoothing, scope=scope,
  247  weights=1.0,
  267  weights: Optional `Tensor` whose rank is either 0, or the same rank as
  288  weights=weights, scope=scope, loss_collection=loss_collection [all...] |
/external/freetype/src/type1/ |
t1load.c |
  256  /* Given a vector of weights, one for each design, figure out the */
  257  /* normalized axis coordinates which gave rise to those weights.  */
  260  mm_weights_unmap( FT_Fixed*  weights,
  267  axiscoords[0] = weights[1];
  271  axiscoords[0] = weights[3] + weights[1];
  272  axiscoords[1] = weights[3] + weights[2];
  277  axiscoords[0] = weights[7] + weights[5] + weights[3] + weights[1] [all...] |
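Note on the entry above: Multiple Master blend weights are products of per-axis factors, so for four designs on two axes the weights are w0=(1-a)(1-b), w1=a(1-b), w2=(1-a)b, w3=ab, and the routine recovers the coordinates as a = w1 + w3 and b = w2 + w3, exactly the index pattern on lines 271-272. A round-trip sketch:

    # Sketch of mm_weights_unmap for the 4-design, 2-axis case:
    # recover (a, b) from the blend weights by summing the weights of
    # the designs on each axis's "high" side.
    def unmap_4_designs(w):
        a = w[1] + w[3]
        b = w[2] + w[3]
        return a, b

    a, b = 0.25, 0.75
    w = [(1 - a) * (1 - b), a * (1 - b), (1 - a) * b, a * b]
    print(unmap_4_designs(w))  # (0.25, 0.75)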
/external/tensorflow/tensorflow/contrib/factorization/examples/ |
mnist.py |
  155  weights = tf.Variable(
  158  name='weights')
  161  hidden1 = tf.nn.relu(tf.matmul(all_scores, weights) + biases)
  164  weights = tf.Variable(
  167  name='weights')
  170  hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
  173  weights = tf.Variable(
  176  name='weights')
  179  logits = tf.matmul(hidden2, weights) + biases
|
/external/tensorflow/tensorflow/contrib/lite/toco/graph_transformations/ |
fuse_binary_into_following_affine.cc |
  45   const auto& weights = model->GetArray(following_op->inputs[1]);
  70   const Shape& weights_shape = weights.shape();
  72   const auto& weights_buffer = weights.GetBuffer<ArrayDataType::kFloat>();
  125  auto& weights = model->GetArray(weights_name);
  138  weights.GetMutableBuffer<ArrayDataType::kFloat>().data.data();
  139  const int weights_size = RequiredBufferSizeForShape(weights.shape());
  244  const auto& weights = model->GetArray(following_op->inputs[1]);
  246  if (!weights.buffer || !bias.buffer) {
  248  "Not fusing %s because the following %s has non-constant weights or "
|
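Note on the entry above: this graph transformation folds an elementwise add or mul that feeds an affine op into that op's constant weights and bias, using the identities W(x + c) + b = Wx + (Wc + b) and W(x * s) + b = (W * s)x + b; hence the line-248 requirement that weights and bias be constant. A NumPy check of both identities:

    import numpy as np

    # The fusion rewrites  W @ (x + c) + b  ->  W @ x + (W @ c + b)   (add)
    # and                  W @ (x * s) + b  ->  (W * s) @ x + b       (mul).
    rng = np.random.default_rng(0)
    W = rng.standard_normal((3, 4)).astype(np.float32)
    b = rng.standard_normal(3).astype(np.float32)
    x = rng.standard_normal(4).astype(np.float32)
    c = rng.standard_normal(4).astype(np.float32)
    s = rng.standard_normal(4).astype(np.float32)

    assert np.allclose(W @ (x + c) + b, W @ x + (W @ c + b), atol=1e-5)
    assert np.allclose(W @ (x * s) + b, (W * s) @ x + b, atol=1e-5)
    print("fusion identities hold")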
/external/tensorflow/tensorflow/docs_src/api_guides/python/ |
meta_graph.md |
  174  weights = tf.Variable(
  177  name="weights")
  180  hidden1 = tf.nn.relu(tf.matmul(images, weights) + biases)
  183  weights = tf.Variable(
  186  name="weights")
  189  hidden2 = tf.nn.relu(tf.matmul(hidden1, weights) + biases)
  192  weights = tf.Variable(
  195  name="weights")
  198  logits = tf.matmul(hidden2, weights) + biases
|
/external/tensorflow/tensorflow/tools/graph_transforms/ |
fold_old_batch_norms.cc |
  118  Tensor weights = GetNodeTensorAttr(weights_node, "value");
  119  const int64 weights_cols = weights.shape().dim_size(3);
  122  // Multiply the original weights by the scale vector.
  123  auto weights_matrix = weights.flat_inner_dims<float>();
  124  Tensor scaled_weights(DT_FLOAT, weights.shape());
  148  // name of the scaled weights constant is the same as the original.
  177  // Fuse conv weights, and set the final output node name as batch_norm_node.
  223  // Fuse the weights for input0 of conv2d.
  229  // Fuse the weights for input1 of conv2d.
|
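Note on the entry above: folding an old-style batch norm into the preceding conv multiplies each output channel of the conv weights (dimension 3 of the HWIO tensor, per line 119) by that channel's scale, so the scaled constant can replace the original weights node. A NumPy check that the fold preserves the output:

    import numpy as np

    # Per-channel scale folded into the last (output-channel) dimension
    # of HWIO conv weights: conv(x, w) * scale == conv(x, w * scale).
    weights = np.random.randn(3, 3, 8, 16).astype(np.float32)  # HWIO
    scale = np.random.rand(16).astype(np.float32)              # one per out channel
    scaled_weights = weights * scale                           # broadcasts over dim 3

    x = np.random.randn(3, 3, 8).astype(np.float32)            # one conv window
    out = np.einsum('hwi,hwio->o', x, weights) * scale
    fused = np.einsum('hwi,hwio->o', x, scaled_weights)
    assert np.allclose(out, fused, atol=1e-4)
    print("per-channel scale folded into conv weights")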
/development/samples/ApiDemos/src/com/example/android/apis/graphics/ |
SensorTest.java |
  44  public RunAve(float[] weights) {
  45  mWeights = weights;
  48  for (int i = 0; i < weights.length; i++) {
  49  sum += weights[i];
  53  mDepth = weights.length;
|
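Note on the entry above: RunAve is a weighted running-average filter; the constructor precomputes the sum of the weights so later samples can be normalized by it. A Python sketch of the same filter (the zero-until-full behavior is my assumption, not shown in the snippet):

    from collections import deque

    # Sketch of the RunAve filter: keep the last len(weights) samples
    # and return the weighted average, normalized by sum(weights) as the
    # Java constructor precomputes.
    class RunAve:
        def __init__(self, weights):
            self.weights = list(weights)
            self.total = float(sum(weights))
            self.samples = deque(maxlen=len(weights))

        def push(self, value):
            self.samples.append(value)

        def value(self):
            if len(self.samples) < len(self.weights):
                return 0.0  # not enough history yet (assumed behavior)
            return sum(w * s for w, s in zip(self.weights, self.samples)) / self.total

    ave = RunAve([1.0, 2.0, 3.0])
    for v in (4.0, 5.0, 6.0):
        ave.push(v)
    print(ave.value())  # (1*4 + 2*5 + 3*6) / 6 = 5.33...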
/external/llvm/docs/ |
BranchWeightMetadata.rst |
  11  Branch Weight Metadata represents branch weights as the likeliness of a branch to be taken
  17  Branch weights might be fetched from the profiling file, or generated based on
  20  All weights are represented as unsigned 32-bit values, where a higher value
  43  Branch weights are assigned to every case (including the ``default`` case, which
  57  Branch weights are assigned to every destination.
|
/external/swiftshader/third_party/LLVM/docs/ |
BranchWeightMetadata.html |
  29  <p>Branch Weight Metadata represents branch weights as the likeliness of a branch to
  35  <p>Branch weights might be fetched from the profiling file, or generated based on
  39  <p>All weights are represented as unsigned 32-bit values, where a higher value
  65  <p>Branch weights are assigned to every case (including the <tt>default</tt> case
  80  <p>Branch weights are assigned to every destination.</p>
|
/external/tensorflow/tensorflow/contrib/boosted_trees/lib/utils/ |
dropout_utils.cc |
  38   const std::vector<float>& weights, std::vector<int32>* dropped_trees,
  45   return errors::InvalidArgument("Original weights is nullptr.");
  59   const auto num_trees = weights.size();
  102  original_weights->push_back(weights[dropped_tree]);
  128  // We have the entries in weights and updates for this tree already
|
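Note on the entry above: the dropout utility selects trees to drop from the ensemble and records their original weights so they can be rescaled later. A rough Python sketch of just the selection-and-record step (the dropout_probability parameter and per-tree coin flip are my assumptions; the real implementation's sampling and reweighting schedule are not shown in the snippet):

    import random

    # Rough sketch: choose trees to drop and remember their original
    # weights, mirroring the dropped_trees / original_weights outputs above.
    def dropout(weights, dropout_probability, rng=random.random):
        dropped_trees, original_weights = [], []
        for tree, weight in enumerate(weights):
            if rng() < dropout_probability:
                dropped_trees.append(tree)
                original_weights.append(weight)
        return dropped_trees, original_weights

    print(dropout([0.3, 1.0, 0.5, 0.8], dropout_probability=0.5))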