/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *      http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

package org.apache.commons.math.ode.nonstiff;

import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.ode.DerivativeException;
import org.apache.commons.math.ode.FirstOrderDifferentialEquations;
import org.apache.commons.math.ode.IntegratorException;
import org.apache.commons.math.ode.sampling.NordsieckStepInterpolator;
import org.apache.commons.math.ode.sampling.StepHandler;
import org.apache.commons.math.util.FastMath;


/**
 * This class implements explicit Adams-Bashforth integrators for Ordinary
 * Differential Equations.
 *
 * <p>Adams-Bashforth methods (in fact due to Adams alone) are explicit
 * multistep ODE solvers. This implementation is a variation of the classical
 * one: it uses adaptive stepsize to implement error control, whereas
 * classical implementations use a fixed step size. The value of the state vector
 * at step n+1 is a simple combination of the value at step n and of the
 * derivatives at steps n, n-1, n-2 ... Depending on the number k of previous
 * steps one wants to use for computing the next value, different formulas
 * are available:</p>
 * <ul>
 *   <li>k = 1: y<sub>n+1</sub> = y<sub>n</sub> + h y'<sub>n</sub></li>
 *   <li>k = 2: y<sub>n+1</sub> = y<sub>n</sub> + h (3y'<sub>n</sub>-y'<sub>n-1</sub>)/2</li>
 *   <li>k = 3: y<sub>n+1</sub> = y<sub>n</sub> + h (23y'<sub>n</sub>-16y'<sub>n-1</sub>+5y'<sub>n-2</sub>)/12</li>
 *   <li>k = 4: y<sub>n+1</sub> = y<sub>n</sub> + h (55y'<sub>n</sub>-59y'<sub>n-1</sub>+37y'<sub>n-2</sub>-9y'<sub>n-3</sub>)/24</li>
 *   <li>...</li>
 * </ul>
 *
 * <p>A k-step Adams-Bashforth method is of order k.</p>
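 *
 * <p>For instance, with the two-step formula (k = 2), a step size h = 0.1 and the
 * purely illustrative values y<sub>n</sub> = 1.1, y'<sub>n</sub> = 1.1 and
 * y'<sub>n-1</sub> = 1.0, the predicted value is
 * y<sub>n+1</sub> = 1.1 + 0.1 &times; (3 &times; 1.1 - 1.0) / 2 = 1.215.</p>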
 *
 * <h3>Implementation details</h3>
 *
 * <p>We define scaled derivatives s<sub>i</sub>(n) at step n as:
 * <pre>
 * s<sub>1</sub>(n) = h y'<sub>n</sub> for first derivative
 * s<sub>2</sub>(n) = h<sup>2</sup>/2 y''<sub>n</sub> for second derivative
 * s<sub>3</sub>(n) = h<sup>3</sup>/6 y'''<sub>n</sub> for third derivative
 * ...
 * s<sub>k</sub>(n) = h<sup>k</sup>/k! y<sup>(k)</sup><sub>n</sub> for k<sup>th</sup> derivative
 * </pre></p>
 *
 * <p>The definitions above use the classical representation with several previous first
 * derivatives. Let's define
 * <pre>
 *   q<sub>n</sub> = [ s<sub>1</sub>(n-1) s<sub>1</sub>(n-2) ... s<sub>1</sub>(n-(k-1)) ]<sup>T</sup>
 * </pre>
 * (we omit the k index in the notation for clarity). With these definitions,
 * Adams-Bashforth methods can be written:
 * <ul>
 *   <li>k = 1: y<sub>n+1</sub> = y<sub>n</sub> + s<sub>1</sub>(n)</li>
 *   <li>k = 2: y<sub>n+1</sub> = y<sub>n</sub> + 3/2 s<sub>1</sub>(n) + [ -1/2 ] q<sub>n</sub></li>
 *   <li>k = 3: y<sub>n+1</sub> = y<sub>n</sub> + 23/12 s<sub>1</sub>(n) + [ -16/12 5/12 ] q<sub>n</sub></li>
 *   <li>k = 4: y<sub>n+1</sub> = y<sub>n</sub> + 55/24 s<sub>1</sub>(n) + [ -59/24 37/24 -9/24 ] q<sub>n</sub></li>
 *   <li>...</li>
 * </ul></p>
 *
 * <p>Instead of using the classical representation with first derivatives only (y<sub>n</sub>,
 * s<sub>1</sub>(n) and q<sub>n</sub>), our implementation uses the Nordsieck vector with
 * higher degree scaled derivatives all taken at the same step (y<sub>n</sub>, s<sub>1</sub>(n)
 * and r<sub>n</sub>) where r<sub>n</sub> is defined as:
 * <pre>
 * r<sub>n</sub> = [ s<sub>2</sub>(n), s<sub>3</sub>(n) ... s<sub>k</sub>(n) ]<sup>T</sup>
 * </pre>
 * (here again we omit the k index in the notation for clarity)
 * </p>
 *
 * <p>Taylor series formulas show that for any index offset i, s<sub>1</sub>(n-i) can be
 * computed from s<sub>1</sub>(n), s<sub>2</sub>(n) ... s<sub>k</sub>(n), the formula being exact
 * for degree k polynomials.
 * <pre>
 * s<sub>1</sub>(n-i) = s<sub>1</sub>(n) + &sum;<sub>j&gt;1</sub> j (-i)<sup>j-1</sup> s<sub>j</sub>(n)
 * </pre>
 * The previous formula can be used with several values for i to compute the transform between
 * the classical representation and the Nordsieck vector. The transform between r<sub>n</sub>
 * and q<sub>n</sub> resulting from the Taylor series formulas above is:
 * <pre>
 * q<sub>n</sub> = s<sub>1</sub>(n) u + P r<sub>n</sub>
 * </pre>
 * where u is the [ 1 1 ... 1 ]<sup>T</sup> vector and P is the (k-1)&times;(k-1) matrix built
 * with the j (-i)<sup>j-1</sup> terms:
 * <pre>
 *        [  -2   3   -4    5  ... ]
 *        [  -4  12  -32   80  ... ]
 *   P =  [  -6  27 -108  405  ... ]
 *        [  -8  48 -256 1280  ... ]
 *        [          ...           ]
 * </pre></p>
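 *
 * <p>As an illustration, for k = 3 the matrix P above reduces to its upper left
 * 2&times;2 block and the relation q<sub>n</sub> = s<sub>1</sub>(n) u + P r<sub>n</sub>
 * is simply the pair of Taylor series formulas
 * <pre>
 * s<sub>1</sub>(n-1) = s<sub>1</sub>(n) - 2 s<sub>2</sub>(n) +  3 s<sub>3</sub>(n)
 * s<sub>1</sub>(n-2) = s<sub>1</sub>(n) - 4 s<sub>2</sub>(n) + 12 s<sub>3</sub>(n)
 * </pre></p>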
 *
 * <p>Using the Nordsieck vector has several advantages:
 * <ul>
 *   <li>it greatly simplifies step interpolation as the interpolator mainly applies
 *   Taylor series formulas,</li>
 *   <li>it simplifies step changes that occur when discrete events that truncate
 *   the step are triggered,</li>
 *   <li>it makes it possible to extend the methods in order to support adaptive stepsize.</li>
 * </ul></p>
 *
 * <p>The Nordsieck vector at step n+1 is computed from the Nordsieck vector at step n as follows:
 * <ul>
 *   <li>y<sub>n+1</sub> = y<sub>n</sub> + s<sub>1</sub>(n) + u<sup>T</sup> r<sub>n</sub></li>
 *   <li>s<sub>1</sub>(n+1) = h f(t<sub>n+1</sub>, y<sub>n+1</sub>)</li>
 *   <li>r<sub>n+1</sub> = (s<sub>1</sub>(n) - s<sub>1</sub>(n+1)) P<sup>-1</sup> u + P<sup>-1</sup> A P r<sub>n</sub></li>
 * </ul>
 * where A is a row-shifting matrix (the lower left part is an identity matrix):
 * <pre>
 *        [ 0 0   ...  0 0 | 0 ]
 *        [ ---------------+---]
 *        [ 1 0   ...  0 0 | 0 ]
 *    A = [ 0 1   ...  0 0 | 0 ]
 *        [       ...      | 0 ]
 *        [ 0 0   ...  1 0 | 0 ]
 *        [ 0 0   ...  0 1 | 0 ]
 * </pre></p>
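 *
 * <p>As a minimal sketch of this update (for a one-dimensional problem, assuming y and s1
 * hold the current state and scaled first derivative, r holds the Nordsieck vector,
 * p1u = P<sup>-1</sup> u and p1AP = P<sup>-1</sup> A P have been precomputed, and f
 * evaluates the differential equation; these names are illustrative only and not part of
 * the public API), one prediction step could read:
 * <pre>
 * double yNext = y + s1;
 * for (int i = 0; i &lt; r.length; ++i) {
 *     yNext += r[i];                       // y(n+1) = y(n) + s1(n) + u^T r(n)
 * }
 * double s1Next = h * f(tNext, yNext);     // s1(n+1) = h f(t(n+1), y(n+1))
 * double[] rNext = new double[r.length];
 * for (int i = 0; i &lt; r.length; ++i) {
 *     rNext[i] = (s1 - s1Next) * p1u[i];   // (s1(n) - s1(n+1)) P^-1 u
 *     for (int j = 0; j &lt; r.length; ++j) {
 *         rNext[i] += p1AP[i][j] * r[j];   // + P^-1 A P r(n)
 *     }
 * }
 * </pre></p>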
 *
 * <p>The P<sup>-1</sup> u vector and the P<sup>-1</sup> A P matrix do not depend on the state,
 * they only depend on k and therefore are precomputed once and for all.</p>
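 *
 * <p>A simple, hypothetical usage example (the equation y' = -y integrated from t = 0
 * to t = 2 with a 4-step method; the step bounds and tolerances below are arbitrary
 * choices, not recommended settings) could look like:
 * <pre>
 * AdamsBashforthIntegrator integ =
 *     new AdamsBashforthIntegrator(4, 1.0e-8, 1.0, 1.0e-10, 1.0e-10);
 * FirstOrderDifferentialEquations ode = new FirstOrderDifferentialEquations() {
 *     public int getDimension() {
 *         return 1;
 *     }
 *     public void computeDerivatives(double t, double[] y, double[] yDot) {
 *         yDot[0] = -y[0];
 *     }
 * };
 * double[] y = new double[] { 1.0 };                       // initial state
 * double stopTime = integ.integrate(ode, 0.0, y, 2.0, y);  // y[0] now approximates exp(-2)
 * </pre></p>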
 *
 * @version $Revision: 1073158 $ $Date: 2011-02-21 22:46:52 +0100 (lun. 21 févr. 2011) $
 * @since 2.0
 */
public class AdamsBashforthIntegrator extends AdamsIntegrator {

    /** Integrator method name. */
    private static final String METHOD_NAME = "Adams-Bashforth";

    /**
     * Build an Adams-Bashforth integrator with the given order and step control parameters.
     * @param nSteps number of steps of the method excluding the one being computed
     * @param minStep minimal step (must be positive even for backward
     * integration), the last step can be smaller than this
     * @param maxStep maximal step (must be positive even for backward
     * integration)
     * @param scalAbsoluteTolerance allowed absolute error
     * @param scalRelativeTolerance allowed relative error
     * @exception IllegalArgumentException if order is 1 or less
     */
    public AdamsBashforthIntegrator(final int nSteps,
                                    final double minStep, final double maxStep,
                                    final double scalAbsoluteTolerance,
                                    final double scalRelativeTolerance)
        throws IllegalArgumentException {
        super(METHOD_NAME, nSteps, nSteps, minStep, maxStep,
              scalAbsoluteTolerance, scalRelativeTolerance);
    }

    /**
     * Build an Adams-Bashforth integrator with the given order and step control parameters.
     * @param nSteps number of steps of the method excluding the one being computed
     * @param minStep minimal step (must be positive even for backward
     * integration), the last step can be smaller than this
     * @param maxStep maximal step (must be positive even for backward
     * integration)
     * @param vecAbsoluteTolerance allowed absolute error
     * @param vecRelativeTolerance allowed relative error
     * @exception IllegalArgumentException if order is 1 or less
     */
    public AdamsBashforthIntegrator(final int nSteps,
                                    final double minStep, final double maxStep,
                                    final double[] vecAbsoluteTolerance,
                                    final double[] vecRelativeTolerance)
        throws IllegalArgumentException {
        super(METHOD_NAME, nSteps, nSteps, minStep, maxStep,
              vecAbsoluteTolerance, vecRelativeTolerance);
    }

    /** {@inheritDoc} */
    @Override
    public double integrate(final FirstOrderDifferentialEquations equations,
                            final double t0, final double[] y0,
                            final double t, final double[] y)
        throws DerivativeException, IntegratorException {

        final int n = y0.length;
        sanityChecks(equations, t0, y0, t, y);
        setEquations(equations);
        resetEvaluations();
        final boolean forward = t > t0;

        // initialize working arrays
        if (y != y0) {
            System.arraycopy(y0, 0, y, 0, n);
        }
        final double[] yDot = new double[n];

        // set up an interpolator sharing the integrator arrays
        final NordsieckStepInterpolator interpolator = new NordsieckStepInterpolator();
        interpolator.reinitialize(y, forward);

        // set up integration control objects
        for (StepHandler handler : stepHandlers) {
            handler.reset();
        }
        setStateInitialized(false);

        // compute the initial Nordsieck vector using the configured starter integrator
        start(t0, y, t);
        interpolator.reinitialize(stepStart, stepSize, scaled, nordsieck);
        interpolator.storeTime(stepStart);
        final int lastRow = nordsieck.getRowDimension() - 1;

        // reuse the step that was chosen by the starter integrator
        double hNew = stepSize;
        interpolator.rescale(hNew);

        // main integration loop
        isLastStep = false;
        do {

            double error = 10;
            while (error >= 1.0) {

                stepSize = hNew;

                // evaluate error using the last term of the Taylor expansion
                error = 0;
                for (int i = 0; i < mainSetDimension; ++i) {
                    final double yScale = FastMath.abs(y[i]);
                    final double tol = (vecAbsoluteTolerance == null) ?
                                       (scalAbsoluteTolerance + scalRelativeTolerance * yScale) :
                                       (vecAbsoluteTolerance[i] + vecRelativeTolerance[i] * yScale);
                    final double ratio  = nordsieck.getEntry(lastRow, i) / tol;
                    error += ratio * ratio;
                }
                error = FastMath.sqrt(error / mainSetDimension);

                if (error >= 1.0) {
                    // reject the step and attempt to reduce error by stepsize control
                    final double factor = computeStepGrowShrinkFactor(error);
                    hNew = filterStep(stepSize * factor, forward, false);
                    interpolator.rescale(hNew);

                }
            }

            // predict a first estimate of the state at step end
            final double stepEnd = stepStart + stepSize;
            interpolator.shift();
            interpolator.setInterpolatedTime(stepEnd);
            System.arraycopy(interpolator.getInterpolatedState(), 0, y, 0, y0.length);

            // evaluate the derivative
            computeDerivatives(stepEnd, y, yDot);

            // update Nordsieck vector
            final double[] predictedScaled = new double[y0.length];
            for (int j = 0; j < y0.length; ++j) {
                predictedScaled[j] = stepSize * yDot[j];
            }
            final Array2DRowRealMatrix nordsieckTmp = updateHighOrderDerivativesPhase1(nordsieck);
            updateHighOrderDerivativesPhase2(scaled, predictedScaled, nordsieckTmp);
            interpolator.reinitialize(stepEnd, stepSize, predictedScaled, nordsieckTmp);

            // discrete events handling
            interpolator.storeTime(stepEnd);
            stepStart = acceptStep(interpolator, y, yDot, t);
            scaled    = predictedScaled;
            nordsieck = nordsieckTmp;
            interpolator.reinitialize(stepEnd, stepSize, scaled, nordsieck);

            if (!isLastStep) {

                // prepare next step
                interpolator.storeTime(stepStart);

                if (resetOccurred) {
                    // some event handler has triggered changes that
                    // invalidate the derivatives; we need to restart from scratch
                    start(stepStart, y, t);
                    interpolator.reinitialize(stepStart, stepSize, scaled, nordsieck);
                }

                // stepsize control for next step
                final double  factor     = computeStepGrowShrinkFactor(error);
                final double  scaledH    = stepSize * factor;
                final double  nextT      = stepStart + scaledH;
                final boolean nextIsLast = forward ? (nextT >= t) : (nextT <= t);
                hNew = filterStep(scaledH, forward, nextIsLast);

                final double  filteredNextT      = stepStart + hNew;
                final boolean filteredNextIsLast = forward ? (filteredNextT >= t) : (filteredNextT <= t);
                if (filteredNextIsLast) {
                    hNew = t - stepStart;
                }

                interpolator.rescale(hNew);

            }

        } while (!isLastStep);

        final double stopTime = stepStart;
        resetInternalState();
        return stopTime;

    }

}