    Searched full:convolution (Results 26 - 50 of 406)


  /external/tensorflow/tensorflow/core/api_def/base_api/
api_def_FusedResizeAndPadConv2D.pbtxt 50 summary: "Performs a resize and padding as a preprocess during a convolution."
53 the packing stage of a convolution, so this op allows for an optimized
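The summary lines above describe an op that folds a resize and a padding step into the packing stage of a 2-D convolution. As a rough, unfused sketch of the same pipeline at the TensorFlow Python level (shapes and padding amounts here are illustrative, not taken from the op's definition):

    import tensorflow as tf

    # Unfused equivalent of the resize -> mirror-pad -> conv2d pipeline
    # that FusedResizeAndPadConv2D combines into a single kernel.
    images = tf.random.normal([1, 32, 32, 3])           # NHWC input (illustrative)
    filters = tf.random.normal([3, 3, 3, 8])            # HWIO filter (illustrative)
    resized = tf.image.resize(images, size=[64, 64])    # bilinear resize
    padded = tf.pad(resized, [[0, 0], [1, 1], [1, 1], [0, 0]], mode="REFLECT")
    output = tf.nn.conv2d(padded, filters, strides=[1, 1, 1, 1], padding="VALID")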
  /external/tensorflow/tensorflow/core/kernels/
conv_3d.h 16 // Functors for 3d convolution.
27 // Applies a 3D convolution to a batch of multi-channel volumes.
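A minimal sketch of what "a 3D convolution applied to a batch of multi-channel volumes" looks like from the Python API (shapes are illustrative; the header above is the C++ functor behind this):

    import tensorflow as tf

    # A batch of multi-channel volumes: [batch, depth, height, width, channels].
    volumes = tf.random.normal([2, 8, 32, 32, 4])
    # Filter laid out as [depth, height, width, in_channels, out_channels].
    kernel = tf.random.normal([3, 3, 3, 4, 16])
    out = tf.nn.conv3d(volumes, kernel, strides=[1, 1, 1, 1, 1], padding="SAME")
    # out has shape [2, 8, 32, 32, 16].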
mkl_conv_ops.h 71 // Calculate Convolution strides
81 // Calculate Convolution input size in MKL-DNN order. MKL-DNN
127 // Calculate Convolution filter size in MKL-DNN order. MKL-DNN
132 // Calculate Convolution filter size in MKL-DNN order. MKL-DNN
136 // parameter for input - it accepts src_shape since Convolution Backward
137 // Input gets shape of input tensor rather than actual tensor (Convolution
181 // Calculate Convolution filter size in MKL-DNN order. MKL-DNN
193 // Calculate Bias size for 2D Convolution. Function does not return
205 // Function to calculate output and padding size for 2D convolution.
207 // Calculate output shape of Convolution in MKL-DNN and TensorFlow order
    [all...]
  /external/tensorflow/tensorflow/python/kernel_tests/
atrous_convolution_test.py 15 """Tests for atrous convolution functionality in tensorflow.ops.nn."""
104 y1 = nn_ops.convolution(
106 y2 = nn_ops.convolution(input=x, filter=filters_upsampled, **kwargs)
116 y = nn_ops.convolution(
123 y = nn_ops.convolution(
221 result = nn_ops.convolution(
223 result = nn_ops.convolution(
231 y1 = nn_ops.convolution(
236 y1 = nn_ops.convolution(
257 output = nn_ops.convolution(
    [all...]
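The calls above compare a dilated convolution against a convolution with an explicitly upsampled filter. A hedged sketch of the same idea through the public API (the tests use the internal nn_ops module, whose keyword names differ slightly between TF versions):

    import tensorflow as tf

    x = tf.random.normal([1, 16, 16, 3])
    filters = tf.random.normal([3, 3, 3, 8])
    # Atrous (dilated) convolution: filter taps are spaced two apart, which is
    # equivalent to convolving with a filter upsampled by inserting zeros.
    y = tf.nn.convolution(x, filters, padding="SAME", dilations=[2, 2])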
conv2d_backprop_filter_grad_test.py 15 """Tests for convolution related functionality in tensorflow.ops.nn."""
43 # Make a convolution op with the current settings, just to easily get
77 # Make a convolution op with the current settings,
conv3d_backprop_filter_v2_grad_test.py 15 """Tests for convolution related functionality in tensorflow.ops.nn."""
44 # Make a convolution op with the current settings, just to easily get
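Both gradient tests build a convolution op just to obtain its filter gradient. A minimal sketch of getting that gradient outside the test harness (this uses GradientTape rather than the tests' gradient checker):

    import tensorflow as tf

    x = tf.random.normal([1, 8, 8, 2])
    w = tf.Variable(tf.random.normal([3, 3, 2, 4]))
    with tf.GradientTape() as tape:
        y = tf.nn.conv2d(x, w, strides=[1, 1, 1, 1], padding="SAME")
        loss = tf.reduce_sum(y)
    # d(loss)/d(w) is computed by a Conv2DBackpropFilter op under the hood.
    grad_w = tape.gradient(loss, w)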
  /external/tensorflow/tensorflow/compiler/xla/service/gpu/
thunk.h 92 // to complete before running. For example, a convolution thunk creates a
93 // scratch allocator, then kicks off a convolution in cudnn via the stream
96 // convolution thunk needs to return true so that future thunks wait for the
97 // convolution thunk to avoid reusing the deallocated memory until the
98 // convolution thunk is done with it.
cudnn_convolution_algorithm_picker.h 34 // memory while timing the various convolution algorithms. If it's null,
42 return "cudnn-convolution-algorithm-picker";
convolution_thunk.h 35 // convolution. It is generated by IrEmitter.
40 // Constructs a thunk for launching a DNN convolution. When run, it will
47 // thunk, but rather to the "output" of a hypothetical forward convolution
67 // Does the convolution for the thunk on "stream".
cudnn_convolution_rewriter.h 30 return "cudnn-convolution-rewriter";
ir_emission_utils.h 61 // A call to cuDNN for convolution (forward, backward filter, or backward input)
66 // regular convolution ops. They have the same LHS and RHS operands, plus two
73 // is the actual result of the convolution, and scratch_memory is temporary
91 // Returns true if `hlo` will be implemented as a call to a cuDNN convolution
117 // or cuDNN convolution.
  /external/tensorflow/tensorflow/contrib/model_pruning/python/layers/
core_layers.py 42 """Abstract nD convolution layer (private, used as implementation base).
44 This layer creates a convolution kernel that is convolved
52 rank: An integer, the rank of the convolution, e.g. "2" for 2D convolution.
54 of filters in the convolution).
56 length of the convolution window.
58 specifying the stride length of the convolution.
68 the dilation rate to use for dilated convolution.
74 kernel_initializer: An initializer for the convolution kernel.
77 kernel_regularizer: Optional regularizer for the convolution kernel
    [all...]
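The docstring fragments above list the usual convolution-layer arguments (filters, kernel_size, strides, dilation_rate, initializer, regularizer). For reference, a sketch of those same arguments on a stock Keras layer; the pruning layers in this package add weight masking on top of this interface:

    import tensorflow as tf

    conv = tf.keras.layers.Conv2D(
        filters=32,                  # dimensionality of the output space
        kernel_size=(3, 3),          # length of the convolution window
        strides=(1, 1),              # stride length of the convolution
        dilation_rate=(1, 1),        # dilation rate for dilated convolution
        kernel_initializer="glorot_uniform",
        kernel_regularizer=tf.keras.regularizers.l2(1e-4),
    )
    y = conv(tf.random.normal([1, 28, 28, 3]))   # -> shape [1, 26, 26, 32]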
  /external/tensorflow/tensorflow/compiler/xla/tests/
convolution_dimension_numbers_test.cc 39 // Tests the convolution operation with invalid input dimension numbers.
49 // Tests the convolution operation with invalid weight dimension numbers.
59 // Tests the convolution operation with invalid output dimension numbers.
convolution_test.cc 16 // Tests of convolution with trivial kernels and no special variations (like
48 // XLA:GPU sometimes uses FFT convolution which isn't as precise as spatial
49 // convolution. So relax the absolute error threshold.
150 // Tests valid padding for 2D convolution in raster space.
184 // Tests same padding for 2D convolution in raster space.
219 // Tests same padding for 2D convolution in raster space with an odd sized
288 // Convolution dimensions are bf0_oi0->bo0.
324 // Convolution dimensions are bf0_oi0->bo0.
355 // Convolution dimensions are bf0_oi0->bo0.
389 // Convolution dimensions are bf0_oi0->bo0
    [all...]
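The VALID/SAME padding cases exercised above rely on the standard output-size rules; a small sketch of that arithmetic (per spatial dimension):

    import math

    def conv_output_size(input_size, kernel_size, stride, padding):
        # Output length of a 1-D slice of the convolution.
        if padding == "VALID":
            return math.ceil((input_size - kernel_size + 1) / stride)
        if padding == "SAME":
            return math.ceil(input_size / stride)
        raise ValueError(padding)

    assert conv_output_size(10, 3, 1, "VALID") == 8
    assert conv_output_size(10, 3, 1, "SAME") == 10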
  /external/webrtc/webrtc/modules/audio_coding/codecs/ilbc/
state_search.c 50 /* Scale to maximum 12 bits to avoid saturation in circular convolution filter */
54 /* Set up the filter coefficients for the circular convolution */
65 /* Run the Zero-Pole filter (Circular convolution) */
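The iLBC code above runs its zero-pole filter as a fixed-point circular convolution (after scaling to 12 bits to avoid saturation). For the concept itself, a floating-point sketch: circular convolution is pointwise multiplication in the DFT domain.

    import numpy as np

    def circular_convolution(x, h):
        # Circular convolution of two equal-length vectors via the FFT.
        n = len(x)
        return np.real(np.fft.ifft(np.fft.fft(x, n) * np.fft.fft(h, n)))

    x = np.array([1.0, 2.0, 3.0, 4.0])
    h = np.array([1.0, 0.0, 0.0, 1.0])
    y = circular_convolution(x, h)   # taps wrap around the end of the buffer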
  /external/tensorflow/tensorflow/compiler/xla/
metric_table_report.h 41 // 1,749,414 (44.71% Σ44.72%) convolution (206 ops)
42 // * 10.51% %convolution.202
43 // * 10.51% %convolution.204
44 // * 10.51% %convolution.203
46 // 884,939 (22.62% ?67.33%) convolution window-dilated (7 ops)
47 // * 7.50% %convolution-window-dilated.7
  /external/ImageMagick/MagickCore/
morphology.h 31 GaussianKernel, /* Convolution Kernels, Gaussian Based */
37 LaplacianKernel, /* Convolution Kernels, by Name */
  /frameworks/av/media/libstagefright/codecs/amrnb/enc/src/
convolve.h 39 * Purpose : Perform the convolution between two vectors x[]
42 * : L samples of the convolution are computed.
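The header above documents a routine that computes the first L samples of the convolution of two vectors. In plain NumPy (the codec does this in fixed-point C), that operation is:

    import numpy as np

    x = np.array([1.0, 2.0, 3.0, 4.0])
    h = np.array([0.5, 0.25])
    L = len(x)
    # First L samples of the linear convolution y[n] = sum_k x[k] * h[n - k].
    y = np.convolve(x, h)[:L]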
  /external/tensorflow/tensorflow/contrib/eager/python/examples/gan/
README.md 4 The discriminator and generator networks each contain a few convolution and
  /external/tensorflow/tensorflow/core/util/
port.h 25 // half-precision matrix multiplications and convolution operations.
  /external/tensorflow/tensorflow/examples/learn/
text_classification_character_cnn.py 53 # Apply Convolution filtering on input sequence.
61 # Max pooling across output of Convolution+Relu.
67 # Transpose matrix so that n_filters from convolution becomes width.
70 # Second level of convolution filtering.
text_classification_cnn.py 52 # Apply Convolution filtering on input sequence.
60 # Max pooling across output of Convolution+Relu.
66 # Transpose matrix so that n_filters from convolution becomes width.
69 # Second level of convolution filtering.
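Both examples follow the same shape: convolution filtering over the embedded input, max pooling over Convolution+ReLU, then a second convolution level. A loose Keras paraphrase of that structure (layer sizes are illustrative, and the examples themselves use conv2d with a transpose between levels rather than Conv1D):

    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=256, output_dim=16),  # character embedding
        tf.keras.layers.Conv1D(10, 20, activation="relu"),        # first convolution level
        tf.keras.layers.MaxPool1D(pool_size=2, strides=2),        # pool over Convolution+ReLU
        tf.keras.layers.Conv1D(10, 20, activation="relu"),        # second convolution level
        tf.keras.layers.GlobalMaxPool1D(),
        tf.keras.layers.Dense(2),                                 # class logits
    ])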
  /external/tensorflow/tensorflow/contrib/slim/python/slim/nets/
resnet_utils.py 88 """Strided 2-D convolution with 'SAME' padding.
116 rate: An integer, rate for atrous convolution.
121 the convolution output.
170 Control of the output feature density is implemented by atrous convolution.
192 # activations. This allows us to invoke atrous convolution whenever applying
197 # The atrous convolution rate parameter.
208 # atrous convolution with stride=1 and multiply the atrous rate by the
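The fragments above describe a strided 2-D convolution with 'SAME' padding whose output density can instead be controlled by an atrous rate. A sketch of the explicit-padding trick such helpers use so the result is independent of input-size parity (based on the documented behavior; 'conv2d_same' here is a stand-in name, and in practice only one of stride or rate is greater than 1):

    import tensorflow as tf

    def conv2d_same(inputs, filters, kernel_size, stride, rate=1):
        # Pad explicitly so a strided (or atrous) convolution with 'VALID'
        # padding behaves like 'SAME' regardless of the input size.
        effective_k = kernel_size + (kernel_size - 1) * (rate - 1)
        pad_total = effective_k - 1
        pad_beg = pad_total // 2
        pad_end = pad_total - pad_beg
        padded = tf.pad(inputs, [[0, 0], [pad_beg, pad_end], [pad_beg, pad_end], [0, 0]])
        return tf.nn.conv2d(padded, filters, strides=[1, stride, stride, 1],
                            padding="VALID", dilations=[1, rate, rate, 1])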
  /external/tensorflow/tensorflow/python/keras/_impl/keras/layers/
convolutional.py 15 """Keras convolution layers and image transformation layers.
46 """1D convolution layer (e.g. temporal convolution).
48 This layer creates a convolution kernel that is convolved
63 (i.e. the number of output filters in the convolution).
65 specifying the length of the 1D convolution window.
67 specifying the stride length of the convolution.
77 the dilation rate to use for dilated convolution.
160 """2D convolution layer (e.g. spatial convolution over images)
    [all...]
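The Conv1D docstring above describes temporal convolution over a sequence. A short sketch with the arguments it lists (filters, kernel_size, strides, dilation_rate); the causal padding and sizes here are illustrative:

    import tensorflow as tf

    x = tf.random.normal([8, 100, 32])            # [batch, time_steps, features]
    conv = tf.keras.layers.Conv1D(filters=64, kernel_size=5,
                                  padding="causal", dilation_rate=2)
    y = conv(x)                                   # -> [8, 100, 64]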
  /external/tensorflow/tensorflow/compiler/tf2xla/kernels/
conv_ops.cc 16 // XLA-specific Ops for 2D convolution.
37 // Returns the expanded size of a filter used for depthwise convolution.
59 // Create a mask for depthwise convolution that will make a normal convolution
60 // produce the same results as a depthwise convolution. For a [2, 2, 3, 2]
128 // zeros for the cross-depth filters. Used to build a depthwise convolution.
227 // For 2D convolution, there should be 4 dimensions.
394 // The input gradients are computed by a convolution of the output
427 // If this is a depthwise convolution, expand the filter.
549 // The filter gradients are computed by a convolution of the input
    [all...]
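The comments above describe expanding a depthwise filter, with zeros for the cross-depth taps, so that a normal convolution reproduces the depthwise result. A NumPy/TensorFlow sketch of that equivalence for the [2, 2, 3, 2] example (this is not the XLA kernel's code; layouts follow the standard TF conventions):

    import numpy as np
    import tensorflow as tf

    kh, kw, in_ch, mult = 2, 2, 3, 2
    x = tf.random.normal([1, 8, 8, in_ch])
    dw_filter = np.random.randn(kh, kw, in_ch, mult).astype(np.float32)

    # Expand [kh, kw, in_ch, mult] -> [kh, kw, in_ch, in_ch * mult], zero
    # everywhere except each channel's own block of output filters.
    expanded = np.zeros((kh, kw, in_ch, in_ch * mult), np.float32)
    for c in range(in_ch):
        expanded[:, :, c, c * mult:(c + 1) * mult] = dw_filter[:, :, c, :]

    y_dw = tf.nn.depthwise_conv2d(x, dw_filter, strides=[1, 1, 1, 1], padding="SAME")
    y_full = tf.nn.conv2d(x, expanded, strides=[1, 1, 1, 1], padding="SAME")
    # y_dw and y_full agree up to floating-point rounding.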
