    Searched refs: optimization (Results 1 - 25 of 70)


  /external/protobuf/src/google/protobuf/compiler/javamicro/
javamicro_params.h 116 void set_optimization(eOptimization optimization) {
117 optimization_ = optimization;
119 eOptimization optimization() const { function in class:google::protobuf::compiler::javamicro::Params
  /external/ceres-solver/docs/
nnlsq.tex 13 Such optimization problems arise in almost every area of science and engineering. Whenever there is data to be analyzed, curves to be fitted, there is usually a linear or a non-linear least squares problem lurking in there somewhere.
15 Perhaps the simplest example of such a problem is the problem of Ordinary Linear Regression, where given observations $(x_1,y_1),\hdots, (x_k,y_k)$, we wish to find the line $y = mx + c$, that best explains $y$ as a function of $x$. One way to solve this problem is to find the solution to the following optimization problem
19 With a little bit of calculus, this problem can be solved easily by hand. But what if, instead of a line we were interested in a more complicated relationship between $x$ and $y$, say for example $y = e^{mx + c}$. Then the optimization problem becomes
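Written out, the two objectives this excerpt refers to are the linear least squares fit of the line $y = mx + c$ and its non-linear counterpart for $y = e^{mx + c}$ (a minimal restatement in the excerpt's own notation):
\[ \min_{m,c} \sum_{i=1}^{k} \left( y_i - m x_i - c \right)^2 \qquad \text{and} \qquad \min_{m,c} \sum_{i=1}^{k} \left( y_i - e^{m x_i + c} \right)^2 \]
The first problem has a closed-form solution; the second does not, which is what motivates the iterative solvers described in solving.tex below.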
further.tex 4 For a short but informative introduction to the subject we recommend the booklet by Madsen et al.~\cite{madsen2004methods}. For a general introduction to non-linear optimization we recommend the text by Nocedal \& Wright~\cite{nocedal2000numerical}. Bj{\"o}rck's book remains the seminal reference on least squares problems~\cite{bjorck1996numerical}. Trefethen \& Bau's book is our favourite text on introductory numerical linear algebra~\cite{trefethen1997numerical}. Triggs et al.\ provide a thorough coverage of the bundle adjustment problem~\cite{triggs-etal-1999}.
reference-overview.tex 9 Where $f_i(\cdot)$ is a cost function that depends on the parameter blocks $\left[x_{i_1}, \hdots , x_{i_k}\right]$ and $\rho_i$ is a loss function. In most optimization problems small groups of scalars occur together. For example the three components of a translation vector and the four components of the quaternion that define the pose of a camera. We refer to such a group of small scalars as a Parameter Block. Of course a parameter block can just have a single parameter.
18 These two steps are mostly independent of each other. This is by design. Modeling the optimization problem should not depend on how it is solved, and the user should be able to switch between various (…)
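Putting the pieces from reference-overview.tex together, the general problem being modeled has the form (a sketch reconstructed from the excerpt, in the same notation; $\rho_i$ is a loss function and the $x_{i_j}$ are parameter blocks):
\[ \min_{x} \; \frac{1}{2} \sum_{i} \rho_i\left( \left\| f_i\left( x_{i_1}, \hdots, x_{i_k} \right) \right\|^2 \right) \]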
solving.tex 3 Effective use of Ceres requires some familiarity with the basic components of a nonlinear least squares solver, so before we describe how to configure the solver, we will begin by taking a brief look at how some of the core optimization algorithms in Ceres work and the various linear solvers and preconditioners that power it.
8 $ F(x) = \left[f_1(x), \hdots, f_{m}(x) \right]^{\top}$ be a $m$-dimensional function of $x$. We are interested in solving the following optimization problem~\footnote{At the level of the non-linear solver, the block and residual structure is not relevant, therefore our discussion here is in terms of an optimization problem defined over a state vector of size $n$.},
13 Here, the Jacobian $J(x)$ of $F(x)$ is an $m\times n$ matrix, where $J_{ij}(x) = \partial_j f_i(x)$ and the gradient vector $g(x) = \nabla \frac{1}{2}\|F(x)\|^2 = J(x)^\top F(x)$. Since the efficient global optimization of~\eqref{eq:nonlinsq} for general $F(x)$ is an intractable problem, we will have to settle for finding a local minimum.
15 The general strategy when solving non-linear optimization problems is to solve a sequence of approximations to the original problem~\cite{nocedal2000numerical}. At each iteration, the approximation is solved to determine a correction $\Delta x$ to the vector $x$. For non-linear least squares, an approximation can be constructed by using the linearization $F(x+\Delta x) \approx F(x) + J(x)\Delta x$, which leads to the following linear least squares problem:
49 The key computational step in a trust-region algorithm is the solution of the constrained optimization problem
61 It can be shown, that the solution to~\eqref{eq:trp} can be obtained by solving an unconstrained optimization of the form
79 The factorization methods are based on computing an exact solution of~\eqref{eq:lsqr} using a Cholesky or a QR factorization and lead to an exact step Levenberg-Marquardt algorithm. But it is not clear if an exact solution of~\eqref{eq:lsqr} is necessary at each step of the LM algorithm to solve~\eqref{eq:nonlinsq}. In fact, we have already seen evidence that this may not be the case, as~\eqref{eq:lsqr} is itself a regularized version of~\eqref{eq:linearapprox}. Indeed, it is possible to construct non-linear optimization algorithms in which the linearized problem is solved approximately. These algorithms are known as inexact Newton or truncated Newton methods~\cite{nocedal2000numerical}.
154 additional optimization step to estimate $a_1$ and $a_2$ exactly after
169 the $a_1$ and $a_2$ optimization problems will do. The only constrain
    [all...]
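For reference, the sequence of approximations that solving.tex walks through can be sketched in the same notation. Here $D(x)$ is a scaling matrix, $\mu$ the trust-region radius and $\lambda$ the Levenberg-Marquardt regularization parameter; these symbols are assumptions where the excerpt does not show the surrounding definitions:
\[ \min_{\Delta x} \; \frac{1}{2} \left\| F(x) + J(x) \Delta x \right\|^2 \qquad \text{(linearized problem)} \]
\[ \min_{\Delta x} \; \frac{1}{2} \left\| F(x) + J(x) \Delta x \right\|^2 \quad \text{s.t.} \quad \left\| D(x) \Delta x \right\|^2 \le \mu \qquad \text{(trust-region step, \eqref{eq:trp})} \]
\[ \min_{\Delta x} \; \frac{1}{2} \left\| F(x) + J(x) \Delta x \right\|^2 + \lambda \left\| D(x) \Delta x \right\|^2 \qquad \text{(regularized form, \eqref{eq:lsqr})} \]
Factorization methods (Cholesky or QR) solve the last problem exactly; inexact or truncated Newton methods solve it only approximately, as the solving.tex excerpt notes.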
curvefitting.tex 4 The examples we have seen until now are simple optimization problems with no data. The original purpose of least squares and non-linear least squares analysis was fitting curves to data. It is only appropriate that we now consider an example of such a problem\footnote{The full code and data for this example can be found in
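As a concrete illustration of the curve-fitting use case described in curvefitting.tex, a minimal Ceres-style sketch for fitting $y = e^{mx + c}$ to data might look like the following; the functor name, data values and solver settings are illustrative, not taken from the excerpt.

#include <cmath>
#include "ceres/ceres.h"

// Residual for one observation (x, y) of the model y = exp(m * x + c).
struct ExponentialResidual {
  ExponentialResidual(double x, double y) : x_(x), y_(y) {}

  template <typename T>
  bool operator()(const T* const m, const T* const c, T* residual) const {
    using std::exp;  // For T = double; Ceres Jet overloads are found via ADL.
    residual[0] = T(y_) - exp(m[0] * T(x_) + c[0]);
    return true;
  }

  const double x_;
  const double y_;
};

int main() {
  // Illustrative data points; a real fit would use the observed samples.
  const double xs[] = {0.0, 1.0, 2.0, 3.0};
  const double ys[] = {1.0, 2.7, 7.4, 20.1};
  double m = 0.0, c = 0.0;  // Initial guesses for the two parameters.

  ceres::Problem problem;
  for (int i = 0; i < 4; ++i) {
    problem.AddResidualBlock(
        new ceres::AutoDiffCostFunction<ExponentialResidual, 1, 1, 1>(
            new ExponentialResidual(xs[i], ys[i])),
        NULL,  // No loss function: plain squared error on each residual.
        &m, &c);
  }

  ceres::Solver::Options options;
  options.linear_solver_type = ceres::DENSE_QR;
  ceres::Solver::Summary summary;
  ceres::Solve(options, &problem, &summary);
  return 0;
}

The template arguments <ExponentialResidual, 1, 1, 1> declare one residual and two one-dimensional parameter blocks, matching the Parameter Block terminology from reference-overview.tex.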
  /external/icu4c/tools/gennorm2/
n2builder.h 52 enum Optimization {
57 void setOptimization(Optimization opt) { optimization=opt; }
109 Optimization optimization; member in class:Normalizer2DataBuilder
  /external/llvm/lib/Target/ARM/AsmParser/
Android.mk 37 # Override the default optimization level to work around taking forever (~50m)
  /dalvik/vm/mterp/armv5te/
OP_INVOKE_OBJECT_INIT_RANGE.S 20 bne .L${opcode}_debugger @ Yes - skip optimization
  /dalvik/vm/mterp/mips/
OP_INVOKE_OBJECT_INIT_RANGE.S 28 bnez a1, .L${opcode}_debugger # Yes - skip optimization
  /dalvik/vm/mterp/x86/
OP_INVOKE_OBJECT_INIT_RANGE.S 22 jnz .L${opcode}_debugger # Yes - skip optimization
  /external/llvm/lib/Transforms/Scalar/
Android.mk 59 # Override the default optimization level to work around a SIGSEGV
  /dalvik/vm/compiler/codegen/arm/
LocalOptimizations.cpp 46 const char *optimization)
48 ALOGD("************ %s ************", optimization);
63 * optimization is scanning in the top-down order and the new instruction
  /dalvik/vm/compiler/codegen/mips/
LocalOptimizations.cpp 46 const char *optimization)
48 LOGD("************ %s ************", optimization);
63 * optimization is scanning in the top-down order and the new instruction
  /system/core/libpixelflinger/
Android.mk 42 # special optimization flags for pixelflinger
  /external/v8/test/mjsunit/
assert-opt-and-deopt.js 41 * The possible optimization states of a function. Must be in sync with the
168 // Let's trigger optimization for another type.
arguments-apply.js 109 // Make sure that the stack after the apply optimization is
top-level-assignments.js 28 // Testing that optimization of top-level object initialization doesn't
  /external/v8/test/mjsunit/compiler/
regress-serialized-slots.js 32 for (var i = 0; i < 10000; i++) { // Loop to trigger optimization.
  /external/v8/test/mjsunit/regress/
regress-1412.js 28 // Test that the apply with arguments optimization passes values
regress-1145.js 51 if (x == 0) fail(); // Hope to be inlined during optimization.
  /ndk/build/core/
add-application.mk 223 $(call ndk_log,Selecting optimization mode through Application.mk: $(APP_OPTIM))
226 $(call ndk_log,Selecting debug optimization mode (app is debuggable))
229 $(call ndk_log,Selecting release optimization mode (app is not debuggable))
  /external/valgrind/main/drd/scripts/
download-and-build-splash2.in 90 CFLAGS := \$(CFLAGS) -Wall -W -Wmissing-prototypes -Wmissing-declarations -Wredundant-decls -Wdisabled-optimization
  /external/v8/src/ia32/
stub-cache-ia32.cc 465 const CallOptimization& optimization,
481 Handle<JSFunction> function = optimization.constant_function();
487 Handle<CallHandlerInfo> api_call_info = optimization.api_call_info();
553 CallOptimization optimization(lookup);
554 if (optimization.is_constant_call()) {
556 holder, lookup, name, optimization, miss);
573 const CallOptimization& optimization,
575 ASSERT(optimization.is_constant_call());
581 if (optimization.is_simple_api_call() &&
583 depth1 = optimization.GetPrototypeDepthOfExpectedType
2182 GenerateFastApiCall(masm(), optimization, argc); local
    [all...]
  /external/v8/src/x64/
stub-cache-x64.cc 445 const CallOptimization& optimization,
461 Handle<JSFunction> function = optimization.constant_function();
467 Handle<CallHandlerInfo> api_call_info = optimization.api_call_info();
537 CallOptimization optimization(lookup);
538 if (optimization.is_constant_call()) {
540 holder, lookup, name, optimization, miss);
557 const CallOptimization& optimization,
559 ASSERT(optimization.is_constant_call());
565 if (optimization.is_simple_api_call() &&
567 depth1 = optimization.GetPrototypeDepthOfExpectedType
2006 GenerateFastApiCall(masm(), optimization, argc); local
    [all...]

