Searched full:technique (Results 476 - 500 of 811) sorted by null
/external/pcre/dist2/doc/html/pcre2partial.html | 441 | Of course, instead of using PCRE2_DFA_RESTART, the same technique of re-running
/external/pcre/dist2/doc/pcre2partial.3 | 410 | Of course, instead of using PCRE2_DFA_RESTART, the same technique of re-running
/external/python/cpython2/Doc/howto/sockets.rst | 129 | around shared memory and locks or semaphores is by far the fastest technique.
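The socket HOWTO fragment above is ranking IPC mechanisms; a minimal sketch of the shared-memory-plus-lock technique it calls fastest, using Python's multiprocessing module (the counter is purely illustrative):

    from multiprocessing import Lock, Process, Value

    def add_one(counter, lock):
        # The lock serializes access to the shared slot; without it, the
        # read-modify-write below could interleave across processes.
        with lock:
            counter.value += 1

    if __name__ == "__main__":
        counter = Value("i", 0)  # An int living in shared memory.
        lock = Lock()
        workers = [Process(target=add_one, args=(counter, lock)) for _ in range(4)]
        for p in workers:
            p.start()
        for p in workers:
            p.join()
        print(counter.value)  # 4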
/external/python/cpython2/Doc/tutorial/inputoutput.rst | 407 | This simple serialization technique can handle lists and dictionaries, but
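The inputoutput.rst hit is from the tutorial's section on the json module; a minimal sketch of the serialization technique it describes (the sample data is made up):

    import json

    # Lists and dictionaries round-trip through JSON directly.
    data = {"name": "example", "values": [1, 2, 3]}
    encoded = json.dumps(data)     # Serialize to a string.
    decoded = json.loads(encoded)  # Reconstruct an equivalent object.
    assert decoded == data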
/external/python/cpython2/Doc/tutorial/stdlib2.rst | 166 | Threading is a technique for decoupling tasks which are not sequentially
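The stdlib2.rst hit describes threading as a decoupling technique; a small sketch with the threading module (the task contents are illustrative):

    import threading
    import time

    def background_task():
        # Work that does not have to run in sequence with the main flow.
        time.sleep(0.1)
        print("background task finished")

    worker = threading.Thread(target=background_task)
    worker.start()
    print("main thread continues while the worker runs")
    worker.join()  # Rejoin the worker before exiting.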
/external/python/cpython2/Lib/asyncore.py | 32 | most popular way to do it, but there is another very different technique,
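The asyncore docstring is contrasting multi-threading with the event-driven alternative it goes on to describe; here is a minimal single-threaded sketch of that technique using the lower-level select module rather than asyncore itself (host, port, and buffer size are placeholders):

    import select
    import socket

    # One thread serves many connections by waiting until a socket is ready.
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    server.bind(("127.0.0.1", 9999))
    server.listen(5)
    server.setblocking(False)

    sockets = [server]
    while True:
        readable, _, _ = select.select(sockets, [], [])
        for sock in readable:
            if sock is server:
                conn, _ = sock.accept()   # New client: watch it as well.
                sockets.append(conn)
            else:
                data = sock.recv(1024)
                if data:
                    sock.sendall(data)    # Echo the bytes back.
                else:
                    sockets.remove(sock)  # Client hung up.
                    sock.close()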
/external/python/cpython2/Parser/pgen.c | 696 | or similar compiler books (this technique is more often used for lexical
/external/python/cpython3/Doc/library/asyncore.rst | 30 | popular way to do it, but there is another very different technique, that lets
/external/python/cpython3/Doc/library/functools.rst | 106 | technique::
/external/python/cpython3/Doc/tutorial/inputoutput.rst | 429 | This simple serialization technique can handle lists and dictionaries, but
/external/python/cpython3/Doc/tutorial/stdlib2.rst | 168 | Threading is a technique for decoupling tasks which are not sequentially
/external/python/cpython3/Lib/asyncore.py | 32 | most popular way to do it, but there is another very different technique,
/external/python/cpython3/Parser/pgen.c | 710 | or similar compiler books (this technique is more often used for lexical
/external/skia/src/core/SkBlurImageFilter.cpp | 288 | // NB the sums in the blur code use the following technique to avoid
/external/skqp/src/core/SkBlurImageFilter.cpp | 288 | // NB the sums in the blur code use the following technique to avoid
/external/swiftshader/third_party/subzero/docs/DESIGN.rst | [all...]
/external/syslinux/bios/txt/html/syslinux-cli.html | 582 | // nodeIterator API would be a better technique but not supported by all
/external/syslinux/efi32/txt/html/syslinux-cli.html | 582 | // nodeIterator API would be a better technique but not supported by all
/external/syslinux/efi64/txt/html/syslinux-cli.html | 582 | // nodeIterator API would be a better technique but not supported by all
/external/syslinux/gpxe/src/arch/i386/prefix/libprefix.S | 153 | /* Print digit (technique by Norbert Juffa <norbert.juffa@amd.com> */
/external/tensorflow/tensorflow/contrib/lite/README.md | 125 | The above pre-trained models have been trained on the ImageNet data set, which consists of 1000 predefined classes. A model will need to be re-trained if these classes are not relevant or useful for a given use case. This technique is called transfer learning, which starts with a model that has been already trained on a problem and will then be retrained on a similar problem. Deep learning from scratch can take days, but transfer learning can be done fairly quickly. In order to do this, a developer will need to generate their custom data set labeled with the relevant classes.
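As a sketch of the retraining step this README paragraph describes, one common way to do transfer learning is to freeze a base network pre-trained on ImageNet and fit only a new classifier head on the custom classes. This uses the Keras API rather than whatever scripts the README goes on to reference, and the class count is a placeholder:

    import tensorflow as tf

    NUM_CLASSES = 5  # Placeholder: number of classes in the custom data set.

    # Start from a network already trained on ImageNet, minus its classifier.
    base = tf.keras.applications.MobileNetV2(
        include_top=False, weights="imagenet", pooling="avg")
    base.trainable = False  # Freeze the features learned on ImageNet.

    # Train only a small new head on the custom labels.
    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    # model.fit(custom_images, custom_labels, epochs=5)  # Custom labeled data.

Because only the final layer's weights are updated, this typically runs quickly, in line with the README's contrast against training from scratch.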
/external/tensorflow/tensorflow/docs_src/tutorials/image_retraining.md | 4 | to fully train. Transfer learning is a technique that shortcuts a lot of this
/external/tensorflow/tensorflow/docs_src/tutorials/wide.md | 370 | Regularization is a technique used to avoid **overfitting**. Overfitting happens
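The wide.md snippet is cut off mid-sentence; as an illustration of the technique it names, here is L2 regularization (ridge regression) added to a least-squares fit in plain NumPy. The tutorial itself applies regularization to a TensorFlow linear model; this swaps in the L2 closed form because it fits in a few lines, and all data here is synthetic:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

    lam = 0.1  # Regularization strength: larger values shrink weights harder.

    # Ridge regression: least squares plus the penalty lam * ||w||^2, which
    # discourages large weights and thereby reduces overfitting.
    w = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)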
/external/tensorflow/tensorflow/docs_src/tutorials/word2vec.md | 225 | [t-SNE dimensionality reduction technique](https://lvdmaaten.github.io/tsne/).
/external/tensorflow/tensorflow/python/ops/distributions/special_math.py | 247 | - For `lower_segment < x <= upper_segment`, use the existing `ndtr` technique
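The special_math.py hit comes from a docstring sketching a piecewise scheme for log_ndtr; in the middle segment it names, the `ndtr` technique is simply to evaluate the normal CDF and take its log, which is numerically safe because ndtr(x) is far from zero there. A sketch using SciPy's ndtr (the segment bounds themselves are not reproduced here):

    import numpy as np
    from scipy.special import ndtr  # Standard normal CDF.

    def log_ndtr_mid(x):
        # Safe in the moderate range, where ndtr(x) is not vanishingly small;
        # for extreme arguments, the docstring describes other approximations.
        return np.log(ndtr(x))

    print(log_ndtr_mid(np.array([-3.0, 0.0, 3.0])))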
Completed in 1228 milliseconds