    Searched full:quantized (Results 176 - 200 of 342) sorted by null


  /external/opencv3/modules/cudalegacy/include/opencv2/
cudalegacy.hpp 156 int Lc; //!< Quantized levels per 'color' component. Power of two, typically 32, 64 or 128.
161 int Lcc; //!< Quantized levels per 'color co-occurrence' component. Power of two, typically 16, 32 or 64.
  /frameworks/av/media/libstagefright/codecs/amrnb/common/include/
qua_gain.h 119 Word16 *qua_ener_MR122, /* o : quantized energy error, Q10 */
121 Word16 *qua_ener, /* o : quantized energy error, Q10 */
  /frameworks/av/media/libstagefright/codecs/amrnb/enc/src/
qgain795.h 125 Word16 *qua_ener_MR122, /* o : quantized energy error, Q10 */
127 Word16 *qua_ener, /* o : quantized energy error, Q10 */
spstproc.cpp 106 Aq -- Pointer to Word16 -- A(z) quantized for the 4 subframes
178 Word16 *Aq, /* i : A(z) quantized for the 4 subframes */
219 * - Update pitch sharpening "sharp" with quantized gain_pit *
cod_amr.cpp 823 Word16 Aq_t[(MP1) * 4]; // A(z) quantized for the 4 subframes
850 Word16 gain_pit_sf0; // Quantized pitch gain for sf0
851 Word16 gain_code_sf0; // Quantized codebook gain for sf0
898 * subframes (both quantized and unquantized) *
1025 Aq = Aq_t; // pointer to interpolated quantized LPC parameters
    [all...]
  /frameworks/av/media/libstagefright/codecs/amrwb/src/
qisf_ns.cpp 48 int16 isf[] : quantized ISFs (in frequency domain)
55 The ISF vector is quantized using VQ with split-by-5
dec_gain2_amr_wb.cpp 255 /* update table of past quantized energies */
325 /* Read the quantized gains */
397 /* update table of past quantized energies */
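
The split-by-5 quantization noted for qisf_ns.cpp above is a split vector quantizer: the ISF vector is cut into sub-vectors and each sub-vector is matched against its own small codebook. Below is a minimal, hedged sketch of that idea in C++; the function name, codebook layout, and squared-error metric are illustrative and not taken from the AMR-WB sources.

#include <cstddef>
#include <limits>
#include <vector>

// Returns, for each split, the index of the nearest codeword (squared error).
// `vec` is assumed to have as many elements as all codebook dimensions combined.
std::vector<std::size_t> SplitVq(
    const std::vector<float>& vec,
    const std::vector<std::vector<std::vector<float>>>& codebooks) {
  std::vector<std::size_t> indices;
  std::size_t offset = 0;
  for (const auto& cb : codebooks) {                    // one codebook per split
    const std::size_t dim = cb.front().size();
    std::size_t best = 0;
    float best_err = std::numeric_limits<float>::max();
    for (std::size_t i = 0; i < cb.size(); ++i) {
      float err = 0.f;
      for (std::size_t d = 0; d < dim; ++d) {
        float diff = vec[offset + d] - cb[i][d];
        err += diff * diff;
      }
      if (err < best_err) { best_err = err; best = i; }
    }
    indices.push_back(best);
    offset += dim;                                      // move to the next sub-vector
  }
  return indices;
}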
  /frameworks/rs/java/tests/Refocus/src/com/android/rs/test/
BlurStack.java 131 * quantized depth.
229 * quantized depth.
  /external/gemmlowp/test/
correctness_meta_gemm.cc 212 std::cout << "Quantized 8 bit." << std::endl << std::flush;
test.cc 707 // Runs a small set of hand-picked data for per-channel quantized data.
785 // Runs a larger set of hand-picked data for per-channel quantized data.
1172 std::int32_t quantized = result_quantized_down_int32(r, c); local
    [all...]
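
The gemmlowp tests above exercise 8-bit quantized arithmetic, which rests on the affine mapping real = scale * (quantized - zero_point). The sketch below illustrates that convention only; the helper names are illustrative and are not part of gemmlowp's API.

#include <algorithm>
#include <cmath>
#include <cstdint>

// Map a real value to an 8-bit quantized value, clamping to the uint8 range.
std::uint8_t Quantize(float real, float scale, std::int32_t zero_point) {
  int q = static_cast<int>(std::lround(real / scale)) + zero_point;
  return static_cast<std::uint8_t>(std::min(255, std::max(0, q)));
}

// Recover the (approximate) real value from its quantized representation.
float Dequantize(std::uint8_t q, float scale, std::int32_t zero_point) {
  return scale * (static_cast<std::int32_t>(q) - zero_point);
}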
  /external/libmpeg2/decoder/
impeg2d_vld.c     [all...]
  /external/opencv3/doc/py_tutorials/py_ml/py_svm/py_svm_opencv/
py_svm_opencv.markdown 37 gradient is quantized to 16 integer values. Divide this image into four sub-squares. For each
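
The tutorial line above describes the descriptor used for SVM digit recognition: quantize the gradient direction into 16 integer values, split the digit image into four sub-squares, and accumulate a magnitude-weighted 16-bin histogram per sub-square (64 values total). A minimal C++ sketch of that idea follows; it is not the tutorial's code (the tutorial is in Python), and the function name and the assumed 20x20 digit size come only from the tutorial's context.

#include <algorithm>
#include <vector>
#include <opencv2/opencv.hpp>

// Build a 64-dimensional histogram-of-gradients descriptor for one 20x20 digit.
std::vector<float> hog16(const cv::Mat& img20x20) {
  cv::Mat gx, gy, mag, ang;
  cv::Sobel(img20x20, gx, CV_32F, 1, 0);
  cv::Sobel(img20x20, gy, CV_32F, 0, 1);
  cv::cartToPolar(gx, gy, mag, ang);                   // ang in [0, 2*pi)
  cv::Mat bins;
  ang.convertTo(bins, CV_32S, 16.0 / (2.0 * CV_PI));   // quantize angle to 16 ints
  std::vector<float> desc(64, 0.f);
  for (int r = 0; r < 20; ++r) {
    for (int c = 0; c < 20; ++c) {
      int cell = (r / 10) * 2 + (c / 10);              // which of the 4 sub-squares
      int b = std::min(bins.at<int>(r, c), 15);
      desc[cell * 16 + b] += mag.at<float>(r, c);      // magnitude-weighted histogram
    }
  }
  return desc;
}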
  /external/webrtc/webrtc/modules/audio_coding/codecs/isac/fix/source/
pitch_gain_tables.c 23 /* cdf for quantized pitch filter gains */
  /external/webrtc/webrtc/voice_engine/test/auto_test/standard/
volume_test.cc 16 // The hardware volume may be more coarsely quantized than [0, 255], so
  /external/ImageMagick/ImageMagick/script/
quantize.html 79 <p>Classification builds a color description tree for the image. Reduction collapses the tree until the number it represents, at most, is the number of colors desired in the output image. Assignment defines the output image's color map and sets each pixel's color by reclassification in the reduced tree. <var>Our goal is to minimize the numerical discrepancies between the original colors and quantized colors</var>. To learn more about quantization error, see <a href="quantize.php#measure">Measuring Color Reduction Error</a>.</p>
81 <p>Classification begins by initializing a color description tree of sufficient depth to represent each possible input color in a leaf. However, it is impractical to generate a fully-formed color description tree in the classification phase for realistic values of <var>Cmax</var>. If color components in the input image are quantized to <var>k</var>-bit precision, so that <var>Cmax</var> = <var>2^k-1</var>, the tree would need <var>k</var> levels below the root node to allow representing each possible input color in a leaf. This becomes prohibitive because the tree's total number of nodes:</p>
160 <p>The normalized error measurement can be used to compare images. In general, the closer the mean error is to zero the more the quantized image resembles the source image. Ideally, the error should be perceptually-based, since the human eye is the final judge of quantization quality.</p>
  /external/ImageMagick/www/
quantize.html 83 <p>Classification builds a color description tree for the image. Reduction collapses the tree until the number it represents, at most, is the number of colors desired in the output image. Assignment defines the output image's color map and sets each pixel's color by reclassification in the reduced tree. <var>Our goal is to minimize the numerical discrepancies between the original colors and quantized colors</var>. To learn more about quantization error, see <a href="quantize.html#measure">Measuring Color Reduction Error</a>.</p>
85 <p>Classification begins by initializing a color description tree of sufficient depth to represent each possible input color in a leaf. However, it is impractical to generate a fully-formed color description tree in the classification phase for realistic values of <var>Cmax</var>. If color components in the input image are quantized to <var>k</var>-bit precision, so that <var>Cmax</var> = <var>2^k-1</var>, the tree would need <var>k</var> levels below the root node to allow representing each possible input color in a leaf. This becomes prohibitive because the tree's total number of nodes:</p>
164 <p>The normalized error measurement can be used to compare images. In general, the closer the mean error is to zero the more the quantized image resembles the source image. Ideally, the error should be perceptually-based, since the human eye is the final judge of quantization quality.</p>
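
Both copies of quantize.html above describe a normalized mean error used to judge how closely the quantized image resembles the source. The sketch below illustrates one such measure (average squared RGB distance, normalized to [0, 1]); it is not ImageMagick's exact formula, and the struct and function names are illustrative.

#include <cstddef>
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// 0 means the quantized image is identical to the original; larger is worse.
double NormalizedMeanError(const Rgb* orig, const Rgb* quant, std::size_t n) {
  double sum = 0.0;
  for (std::size_t i = 0; i < n; ++i) {
    double dr = orig[i].r - quant[i].r;
    double dg = orig[i].g - quant[i].g;
    double db = orig[i].b - quant[i].b;
    sum += dr * dr + dg * dg + db * db;
  }
  // Normalize by the maximum possible squared distance (3 * 255^2) per pixel.
  return sum / (n * 3.0 * 255.0 * 255.0);
}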
  /external/libvpx/libvpx/vp9/encoder/
vp9_rd.c 324 // when quantized with a uniform quantizer with given stepsize. The
346 // with given variance when quantized with a uniform quantizer
395 // source with given variance when quantized with a uniform quantizer
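
The vp9_rd.c comments above refer to the entropy of a Laplacian source quantized with a uniform quantizer of a given step size. The following is a rough numeric sketch of that quantity, not libvpx's model; the function name and the choice of a bin centered at zero are assumptions.

#include <cmath>

// Entropy in bits of a zero-mean Laplacian source with standard deviation sd,
// quantized by a uniform quantizer with step size delta and a zero-centered bin.
double LaplacianQuantizedEntropy(double sd, double delta) {
  const double b = sd / std::sqrt(2.0);                    // Laplacian scale parameter
  auto tail = [b](double t) { return std::exp(-t / b); };  // P(|X| > t)
  double p0 = 1.0 - tail(0.5 * delta);                     // central bin
  double h = (p0 > 0.0) ? -p0 * std::log2(p0) : 0.0;
  for (int k = 1; k < 10000; ++k) {
    // Probability of the +k bin; the -k bin contributes the same amount.
    double pk = 0.5 * (tail((k - 0.5) * delta) - tail((k + 0.5) * delta));
    if (pk <= 0.0) break;
    h -= 2.0 * pk * std::log2(pk);
  }
  return h;
}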
  /frameworks/av/media/libstagefright/codecs/amrnb/common/src/
lsp.cpp 461 * Find interpolated LPC parameters in all subframes (both quantized *
464 * and the quantized interpolated parameters are in array Aq_t[] *
470 /* LSP quantization (lsp_mid[] and lsp_new[] jointly quantized) */
491 * Find interpolated LPC parameters in all subframes (both quantized *
494 * and the quantized interpolated parameters are in array Aq_t[] *
  /frameworks/av/media/libstagefright/codecs/on2/h264dec/omxdl/arm11/vc/api/
omxVC.h     [all...]
  /frameworks/av/media/libstagefright/codecs/on2/h264dec/omxdl/arm_neon/vc/api/
omxVC.h     [all...]
  /frameworks/av/media/libstagefright/codecs/on2/h264dec/omxdl/reference/vc/api/
omxVC.h     [all...]
  /external/aac/libAACdec/src/
aacdec_hcr.cpp 468 bitstream according to the HCR algorithm and stores the quantized spectral
552 description: This function reorders the quantized spectral coefficients sectionwise for
586 /* long and short: check if decoded huffman-values (quantized spectral coefficients) are within range */
    [all...]
  /external/flac/include/FLAC/
format.h 137 /** The minimum quantized linear predictor coefficient precision
142 /** The maximum quantized linear predictor coefficient precision
323 /**< Quantized FIR filter coefficient precision in bits. */
    [all...]
  /external/libavc/encoder/
ime_distortion_metrics.c 894 * Threshold for each element of transformed quantized block
1014 * Threshold for each element of transformed quantized block
1152 * Threshold for each element of transformed quantized block
    [all...]
  /cts/apps/CtsVerifier/src/com/android/cts/verifier/sensors/
MotionIndicatorView.java 340 * If it is LINEAR mode, the range will be quantized to nearest step boundary. If it is the
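
The MotionIndicatorView comment above describes snapping a range to the nearest step boundary in LINEAR mode. A one-line sketch of that rounding follows; the function name is illustrative only.

#include <cmath>

// Snap a value to the nearest multiple of `step` (step is assumed > 0).
float QuantizeToStep(float value, float step) {
  return std::round(value / step) * step;
}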

Completed in 718 milliseconds
