\begin{page}{manpageXXe01}{NAG On-line Documentation: e01}
\beginscroll
\begin{verbatim}



     E01(3NAG)         Foundation Library (12/10/92)         E01(3NAG)



          E01 -- Interpolation                          Introduction -- E01
                                    Chapter E01
                                   Interpolation

          1. Scope of the Chapter

          This chapter is concerned with the interpolation of a function of
          one or two variables. When provided with the value of the
          function (and possibly one or more of its lowest-order
          derivatives) at each of a number of values of the variable(s),
          the routines provide either an interpolating function or an
          interpolated value. For some of the interpolating functions,
          there are supporting routines to evaluate, differentiate or
          integrate them.

          2. Background to the Problems

          In motivation and in some of its numerical processes, this
          chapter has much in common with Chapter E02 (Curve and Surface
          Fitting). For this reason, we shall adopt the same terminology
          and refer to dependent variable and independent variable(s)
          instead of function and variable(s). Where there is only one
          independent variable, we shall denote it by x and the dependent
          variable by y. Thus, in the basic problem considered in this
          chapter, we are given a set of distinct values x_1, x_2, ..., x_m
          of x and a corresponding set of values y_1, y_2, ..., y_m of y,
          and we shall describe the problem as being one of interpolating
          the data points (x_r, y_r), rather than interpolating a
          function. In modern
          usage, however, interpolation can have either of two rather
          different meanings, both relevant to routines in this chapter.
          They are

          (a) the determination of a function of x which takes the value
              y_r at x = x_r, for r=1,2,...,m (an interpolating function
              or interpolant),

          (b) the determination of the value (interpolated value or
              interpolate) of an interpolating function at any given
              value, say xhat, of x within the range of the x_r (so as to
              estimate the value at xhat of the function underlying the
              data).

          The latter is the older meaning, associated particularly with the
          use of mathematical tables. The term 'function underlying the
          data', like the other terminology described above, is used so as
          to cover situations additional to those in which the data points
          have been computed from a known function, as with a mathematical
          table. In some contexts, the function may be unknown, perhaps
          representing the dependency of one physical variable on another,
          say temperature upon time.

          Whether the underlying function is known or unknown, the object
          of interpolation will usually be to approximate it to acceptable
          accuracy by a function which is easy to evaluate anywhere in some
          range of interest. Piecewise polynomials such as cubic splines
          (see Section 2.2 of the E02 Chapter Introduction for definitions
          of terms in this case), being easy to evaluate and also capable
          of approximating a wide variety of functions, are the types of
          function mostly used in this chapter as interpolating functions.

          Piecewise polynomials also, to a large extent, avoid the well-
          known problem of large unwanted fluctuations which can arise when
          interpolating a data set with a simple polynomial. Fluctuations
          can still arise but much less frequently and much less severely
          than with simple polynomials. Unwanted fluctuations are avoided
          altogether by a routine using piecewise cubic polynomials having
          only first derivative continuity. It is designed especially for
          monotonic data, but for other data still provides an interpolant
          which increases, or decreases, over the same intervals as the
          data.

          The concept of interpolation can be generalised in a number of
          ways. For example, we may be required to estimate the value of
          the underlying function at a value xhat outside the range of the
          data. This is the process of extrapolation. In general, it is a
          good deal less accurate than interpolation and is to be avoided
          whenever possible.

          Interpolation can also be extended to the case of two independent
          variables. We shall denote these by x and y, and the dependent
          variable by f. Methods used depend markedly on whether or not the
          data values of f are given at the intersections of a rectangular
          mesh in the (x,y)-plane. If they are, bicubic splines (see
          Section 2.3.2 of the E02 Chapter Introduction) are very suitable
          and usually very effective for the problem. For other cases,
          perhaps where the f values are quite arbitrarily scattered in the
          (x,y)-plane, polynomials and splines are not at all appropriate
          and special forms of interpolating function have to be employed.
          Many such forms have been devised and two of the most successful
          are provided by routines in this chapter. They both have
          continuity in
          first, but not higher, derivatives.

          2.1. References

          [1]   Froberg C E (1970) Introduction to Numerical Analysis.
                Addison-Wesley (2nd Edition).

          [2]   Dahlquist G and Bjork A (1974) Numerical Methods. Prentice-
                Hall.

          3. Recommendations on Choice and Use of Routines

          3.1. General

          Before undertaking interpolation, in other than the simplest
          cases, the user should seriously consider the alternative of
          using a routine from Chapter E02 to approximate the data by a
          polynomial or spline containing significantly fewer coefficients
          than the corresponding interpolating function. This approach is
          much less liable to produce unwanted fluctuations and so can
          often provide a better approximation to the function underlying
          the data.

          When interpolation is employed to approximate either an
          underlying function or its values, the user will need to be
          satisfied that the accuracy of approximation achieved is
          adequate. There may be a means for doing this which is particular
          to the application, or the routine used may itself provide a
          means. In other cases, one possibility is to repeat the
          interpolation using one or more extra data points, if they are
          available, or otherwise one or more fewer, and to compare the
          results. Other possibilities, if it is an interpolating function
          which is determined, are to examine the function graphically, if
          that gives sufficient accuracy, or to observe the behaviour of
          the differences in a finite-difference table, formed from
          evaluations of the interpolating function at equally-spaced
          values of x over the range of interest. The spacing should be
          small enough to cause the typical size of the differences to
          decrease as the order of difference increases.
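
          For illustration only (this sketch is not part of the Library),
          such a difference table might be formed along the following
          lines, assuming the interpolant has already been evaluated at
          equally-spaced points and the values stored in an array S of
          length NP:

                  SUBROUTINE DIFTAB (NP, S, D, MAXORD)
          C       Illustrative sketch: form and print forward differences
          C       of the values S(1),...,S(NP) up to order MAXORD, so that
          C       their decay with increasing order can be inspected.
          C       D is a workspace array of length at least NP.
                  INTEGER          NP, MAXORD, I, K
                  DOUBLE PRECISION S(NP), D(NP)
                  DO 10 I = 1, NP
                     D(I) = S(I)
            10    CONTINUE
                  DO 30 K = 1, MAXORD
                     DO 20 I = 1, NP - K
                        D(I) = D(I+1) - D(I)
            20       CONTINUE
                     WRITE (*,*) 'Order', K, ' differences:',
                 *               (D(I), I = 1, NP-K)
            30    CONTINUE
                  RETURN
                  END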

          3.2. One Independent Variable

          E01BAF computes an interpolating cubic spline, using a particular
          choice for the set of knots which has proved generally
          satisfactory in practice. If the user wishes to choose a
          different set, a cubic spline routine from Chapter E02, namely
          E02BAF, may be used in its interpolating mode, setting NCAP7 = M+
          4 and all elements of the parameter W to unity. These routines
          provide the interpolating function in B-spline form (see Section
          2.2.2 in the E02 Chapter Introduction). Routines for evaluating,
          differentiating and integrating this form are discussed in
          Section 3.7 of the E02 Chapter Introduction.

          The cubic spline does not always avoid unwanted fluctuations,
          especially when the data show a steep slope close to a region of
          small slope, or when the data inadequately represent the
          underlying curve. In such cases, E01BEF can be very useful. It
          derives a piecewise cubic polynomial (with first derivative
          continuity) which, between any adjacent pair of data points,
          either increases all the way, or decreases all the way (or stays
          constant). It is especially suited to data which are monotonic
          over their whole range.

          In this routine, the interpolating function is represented simply
          by its value and first derivative at the data points. Supporting
          routines compute its value and first derivative elsewhere, as
          well as its definite integral over an arbitrary interval.
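
          By way of illustration only (this sketch is not part of the
          Library documentation, and the data values are invented), a
          minimal calling sequence for this family of routines might take
          the following form:

                  PROGRAM MONO
          C       Illustrative sketch: fit a monotone piecewise cubic to
          C       N=5 data points with E01BEF, evaluate it at M=9 points
          C       with E01BFF, and integrate it over [X(1),X(N)] with
          C       E01BHF.
                  INTEGER          N, M
                  PARAMETER        (N=5, M=9)
                  INTEGER          I, IFAIL
                  DOUBLE PRECISION X(N), F(N), D(N), PX(M), PF(M), PINT
                  DATA             X /1.0D0, 2.0D0, 3.0D0, 4.0D0, 5.0D0/
                  DATA             F /1.0D0, 2.0D0, 2.5D0, 2.6D0, 4.0D0/
                  DO 10 I = 1, M
                     PX(I) = 1.0D0 + 0.5D0*DBLE(I-1)
            10    CONTINUE
                  IFAIL = 0
                  CALL E01BEF (N, X, F, D, IFAIL)
                  IFAIL = 0
                  CALL E01BFF (N, X, F, D, M, PX, PF, IFAIL)
                  IFAIL = 0
                  CALL E01BHF (N, X, F, D, X(1), X(N), PINT, IFAIL)
                  WRITE (*,*) (PX(I), PF(I), I = 1, M)
                  WRITE (*,*) 'Integral over [X(1),X(N)] =', PINT
                  END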

          3.3. Two Independent Variables

          3.3.1.  Data on a rectangular mesh

          Given the value f_(q,r) of the dependent variable f at the point
          (x_q, y_r) in the plane of the independent variables x and y,
          for each q=1,2,...,m and r=1,2,...,n (so that the points
          (x_q, y_r) lie at the m*n intersections of a rectangular mesh),
          E01DAF computes
          an interpolating bicubic spline, using a particular choice for
          each of the spline's knot-sets. This choice, the same as in
          E01BAF, has proved generally satisfactory in practice. If,
          instead, the
          user wishes to specify his own knots, a routine from Chapter E02,
          namely E02DAF, may be adapted (it is more cumbersome for the
          purpose, however, and much slower for larger problems). Using m
          and n in the above sense, the parameter M must be set to m*n, PX
          and PY must be set to m+4 and n+4 respectively and all elements
          of W should be set to unity. The recommended value for EPS is
          zero.

          3.3.2.  Arbitrary data

          As remarked at the end of Section 2, special types of
          interpolating function are required for this problem, which can
          often be
          difficult to solve satisfactorily. Two of the most successful are
          employed in E01SAF and E01SEF, the two routines which (with their
          respective evaluation routines E01SBF and E01SFF) are provided
          for the problem. Definitions can be found in the routine
          documents. Both interpolants have first derivative continuity and
          are 'local', in that their value at any point depends only on
          data in the immediate neighbourhood of the point. This latter
          feature is necessary for large sets of data to avoid prohibitive
          computing time.

          The relative merits of the two methods vary with the data and it
          is not possible to predict which will be the better in any
          particular case.

          3.4. Index

          Derivative, of interpolant from E01BEF                     E01BGF

          Evaluation, of interpolant
               from E01BEF                                           E01BFF
               from E01SAF                                           E01SBF
               from E01SEF                                           E01SFF
          Extrapolation, one variable                                E01BEF
          Integration (definite) of interpolant from E01BEF          E01BHF
          Interpolated values, one variable,
               from interpolant from E01BEF                          E01BFF
                                                                     E01BGF
          Interpolated values, two variables,
               from interpolant from E01SAF                          E01SBF
               from interpolant from E01SEF                          E01SFF
          Interpolating function, one variable,
               cubic spline                                          E01BAF
               other piecewise polynomial                            E01BEF
          Interpolating function, two variables
               bicubic spline                                        E01DAF
               other piecewise polynomial                            E01SAF
               modified Shepard method                               E01SEF


          E01 -- Interpolation                              Contents -- E01
          Chapter E01

          Interpolation

          E01BAF  Interpolating functions, cubic spline interpolant, one
                  variable

          E01BEF  Interpolating functions, monotonicity-preserving,
                  piecewise cubic Hermite, one variable

          E01BFF  Interpolated values, interpolant computed by E01BEF,
                  function only, one variable

          E01BGF  Interpolated values, interpolant computed by E01BEF,
                  function and 1st derivative, one variable

          E01BHF  Interpolated values, interpolant computed by E01BEF,
                  definite integral, one variable

          E01DAF  Interpolating functions, fitting bicubic spline, data on
                  rectangular grid

          E01SAF  Interpolating functions, method of Renka and Cline, two
                  variables

          E01SBF  Interpolated values, evaluate interpolant computed by
                  E01SAF, two variables

          E01SEF  Interpolating functions, modified Shepard's method, two
                  variables

          E01SFF  Interpolated values, evaluate interpolant computed by
                  E01SEF, two variables

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01baf}{NAG On-line Documentation: e01baf}
\beginscroll
\begin{verbatim}



     E01BAF(3NAG)      Foundation Library (12/10/92)      E01BAF(3NAG)



          E01 -- Interpolation                                       E01BAF
                  E01BAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01BAF determines a cubic spline interpolant to a given set of
          data.

          2. Specification

                 SUBROUTINE E01BAF (M, X, Y, LAMDA, C, LCK, WRK, LWRK,
                1                   IFAIL)
                 INTEGER          M, LCK, LWRK, IFAIL
                 DOUBLE PRECISION X(M), Y(M), LAMDA(LCK), C(LCK), WRK(LWRK)

          3. Description

          This routine determines a cubic spline s(x), defined in the
          range x_1 <= x <= x_m, which interpolates (passes exactly
          through) the set of data points (x_i, y_i), for i=1,2,...,m,
          where m >= 4 and x_1 < x_2 < ... < x_m. End conditions are not
          imposed. The spline interpolant chosen has m-4 interior knots
          lambda_5, lambda_6, ..., lambda_m, which are set to the values
          of x_3, x_4, ..., x_(m-2) respectively. This spline is
          represented in its B-spline form (see Cox [1]):

                           s(x) = sum(i=1,...,m) c_i N_i(x),

          where N_i(x) denotes the normalised B-spline of degree 3,
          defined upon the knots lambda_i, lambda_(i+1), ...,
          lambda_(i+4), and c_i denotes its coefficient, whose value is
          to be determined by the routine.

          The use of B-splines requires eight additional knots lambda_1,
          lambda_2, lambda_3, lambda_4, lambda_(m+1), lambda_(m+2),
          lambda_(m+3) and lambda_(m+4) to be specified; the routine sets
          the first four of these to x_1 and the last four to x_m.

          The algorithm for determining the coefficients is as described in
          [1] except that QR factorization is used instead of LU
          decomposition. The implementation of the algorithm involves
          setting up appropriate information for the related routine E02BAF
          followed by a call of that routine. (For further details of
          E02BAF, see the routine document.)

          Values of the spline interpolant, or of its derivatives or
          definite integral, can subsequently be computed as detailed in
          Section 8.

          4. References

          [1]   Cox M G (1975) An Algorithm for Spline Interpolation. J.
                Inst. Math. Appl. 15 95--108.

          [2]   Cox M G (1977) A Survey of Numerical Methods for Data and
                Function Approximation. The State of the Art in Numerical
                Analysis. (ed D A H Jacobs) Academic Press. 627--668.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: m, the number of data points. Constraint: M >= 4.

           2:  X(M) -- DOUBLE PRECISION array                         Input
               On entry: X(i) must be set to x_i, the ith data value of the
               independent variable x, for i=1,2,...,m. Constraint: X(i) <
               X(i+1), for i=1,2,...,M-1.

           3:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: Y(i) must be set to y_i, the ith data value of the
               dependent variable y, for i=1,2,...,m.

           4:  LAMDA(LCK) -- DOUBLE PRECISION array                  Output
               On exit: the value of lambda_i, the ith knot, for
               i=1,2,...,m+4.

           5:  C(LCK) -- DOUBLE PRECISION array                      Output
               On exit: the coefficient c_i of the B-spline N_i(x), for
               i=1,2,...,m. The remaining elements of the array are not
               used.

           6:  LCK -- INTEGER                                         Input
               On entry:
               the dimension of the arrays LAMDA and C as declared in the
               (sub)program from which E01BAF is called.
               Constraint: LCK >= M + 4.

           7:  WRK(LWRK) -- DOUBLE PRECISION array                Workspace

           8:  LWRK -- INTEGER                                        Input
               On entry:
               the dimension of the array WRK as declared in the
               (sub)program from which E01BAF is called.
               Constraint: LWRK>=6*M+16.

           9:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               On entry M < 4,

               or       LCK<M+4,

               or       LWRK<6*M+16.

          IFAIL= 2
               The X-values fail to satisfy the condition

               X(1) < X(2) < X(3) < ... < X(M).

          7. Accuracy

          The rounding errors incurred are such that the computed spline
          is an exact interpolant for a slightly perturbed set of
          ordinates y_i + delta(y_i). The ratio of the root-mean-square
          value of the delta(y_i) to that of the y_i is no greater than a
          small multiple of the relative machine precision.

          8. Further Comments

          The time taken by the routine is approximately proportional to m.

          All the x_i are used as knot positions except x_2 and x_(m-1).
          This choice of knots (see Cox [2]) means that s(x) is composed
          of m-3 cubic arcs as follows. If m=4, there is just a single arc
          spanning the whole interval x_1 to x_4. If m>=5, the first and
          last arcs span the intervals x_1 to x_3 and x_(m-2) to x_m
          respectively. Additionally, if m>=6, the ith arc, for
          i=2,3,...,m-4, spans the interval x_(i+1) to x_(i+2).

          After the call

                 CALL E01BAF (M, X, Y, LAMDA, C, LCK, WRK, LWRK, IFAIL)

          the following operations may be carried out on the interpolant
          s(x).

          The value of s(x) at x = XVAL can be provided in the real
          variable SVAL by the call

                 CALL E02BBF (M+4, LAMDA, C, XVAL, SVAL, IFAIL)

          The values of s(x) and its first three derivatives at x = XVAL
          can be provided in the real array SDIF of dimension 4, by the
          call

                 CALL E02BCF (M+4, LAMDA, C, XVAL, LEFT, SDIF, IFAIL)

          Here LEFT must specify whether the left- or right-hand value of
          the third derivative is required (see E02BCF for details).

          The value of the integral of s(x) over the range x_1 to x_m can be
          provided in the real variable SINT by

                 CALL E02BDF (M+4, LAMDA, C, SINT, IFAIL)

          9. Example

          The example program sets up data from 7 values of the exponential
          function in the interval 0 to 1. E01BAF is then called to compute
          a spline interpolant to these data.

          The spline is evaluated by E02BBF, at the data points and at
          points halfway between each adjacent pair of data points, and the
          spline values and the values of exp(x) are printed out.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
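
          For illustration only, a minimal program along the lines of that
          example (but not the distributed example program itself) might
          be written as follows, using the E02BBF call described in
          Section 8:

                  PROGRAM E01BAX
          C       Illustrative sketch: interpolate M=7 values of exp(x) on
          C       [0,1] by E01BAF and evaluate the resulting spline at one
          C       point by E02BBF.
                  INTEGER          M, LCK, LWRK
                  PARAMETER        (M=7, LCK=M+4, LWRK=6*M+16)
                  INTEGER          I, IFAIL
                  DOUBLE PRECISION X(M), Y(M), LAMDA(LCK), C(LCK),
                 *                 WRK(LWRK), XVAL, SVAL
                  DO 10 I = 1, M
                     X(I) = DBLE(I-1)/DBLE(M-1)
                     Y(I) = EXP(X(I))
            10    CONTINUE
                  IFAIL = 0
                  CALL E01BAF (M, X, Y, LAMDA, C, LCK, WRK, LWRK, IFAIL)
                  XVAL = 0.25D0
                  IFAIL = 0
                  CALL E02BBF (M+4, LAMDA, C, XVAL, SVAL, IFAIL)
                  WRITE (*,*) 'x =', XVAL, '  s(x) =', SVAL,
                 *            '  exp(x) =', EXP(XVAL)
                  END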

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01bef}{NAG On-line Documentation: e01bef}
\beginscroll
\begin{verbatim}



     E01BEF(3NAG)      Foundation Library (12/10/92)      E01BEF(3NAG)



          E01 -- Interpolation                                       E01BEF
                  E01BEF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01BEF computes a monotonicity-preserving piecewise cubic Hermite
          interpolant to a set of data points.

          2. Specification

                 SUBROUTINE E01BEF (N, X, F, D, IFAIL)
                 INTEGER          N, IFAIL
                 DOUBLE PRECISION X(N), F(N), D(N)

          3. Description

          This routine estimates first derivatives at the set of data
          points (x_r, f_r), for r=1,2,...,n, which determine a piecewise
          cubic Hermite interpolant to the data that preserves
          monotonicity over ranges where the data points are monotonic. If
          the data points are only piecewise monotonic, the interpolant
          will have an extremum at each point where monotonicity switches
          direction. The estimates of the derivatives are computed by a
          formula due to Brodlie, which is described in Fritsch and Butland
          [1], with suitable changes at the boundary points.

          The routine is derived from routine PCHIM in Fritsch [2].

          Values of the computed interpolant, and of its first derivative
          and definite integral, can subsequently be computed by calling
          E01BFF, E01BGF and E01BHF, as described in Section 8.

          4. References

          [1]   Fritsch F N and Butland J (1984) A Method for Constructing
                Local Monotone Piecewise Cubic Interpolants. SIAM J. Sci.
                Statist. Comput. 5 300--304.

          [2]   Fritsch F N (1982) PCHIP Final Specifications. Report UCID-
                30194. Lawrence Livermore National Laboratory.

          5. Parameters

           1:  N -- INTEGER                                           Input
               On entry: n, the number of data points. Constraint: N >= 2.

           2:  X(N) -- DOUBLE PRECISION array                         Input
               On entry: X(r) must be set to x_r, the rth value of the
               independent variable (abscissa), for r=1,2,...,n.
               Constraint: X(r) < X(r+1).

           3:  F(N) -- DOUBLE PRECISION array                         Input
               On entry: F(r) must be set to f_r, the rth value of the
               dependent variable (ordinate), for r=1,2,...,n.

           4:  D(N) -- DOUBLE PRECISION array                        Output
               On exit: estimates of derivatives at the data points. D(r)
               contains the derivative at X(r).

           5:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry N < 2.

          IFAIL= 2
               The values of X(r), for r=1,2,...,N, are not in strictly
               increasing order.

          7. Accuracy

          The computational errors in the array D should be negligible in
          most practical situations.

          8. Further Comments

          The time taken by the routine is approximately proportional to n.

          The values of the computed interpolant at the points PX(i), for
          i=1,2,...,M, may be obtained in the real array PF, of length at
          least M, by the call:


                  CALL E01BFF(N,X,F,D,M,PX,PF,IFAIL)

          where N, X and F are the input parameters to E01BEF and D is the
          output parameter from E01BEF.

          The values of the computed interpolant at the points PX(i), for i
          = 1,2,...,M, together with its first derivatives, may be obtained
          in the real arrays PF and PD, both of length at least M, by the
          call:

                  CALL E01BGF(N,X,F,D,M,PX,PF,PD,IFAIL)

          where N, X, F and D are as described above.

          The value of the definite integral of the interpolant over the
          interval A to B can be obtained in the real variable PINT by the
          call:

                  CALL E01BHF(N,X,F,D,A,B,PINT,IFAIL)

          where N, X, F and D are as described above.

          9. Example

          This example program reads in a set of data points, calls E01BEF
          to compute a piecewise monotonic interpolant, and then calls
          E01BFF to evaluate the interpolant at equally spaced points.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
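
          A minimal sketch of such a calling sequence (not the distributed
          example program) is shown below; it computes the interpolant
          with E01BEF and then obtains values and first derivatives at
          equally spaced points with E01BGF:

                  PROGRAM E01BEX
          C       Illustrative sketch: compute the monotone interpolant
          C       for N=6 data points and obtain its values and first
          C       derivatives at M=11 equally spaced points by E01BGF.
                  INTEGER          N, M
                  PARAMETER        (N=6, M=11)
                  INTEGER          I, IFAIL
                  DOUBLE PRECISION X(N), F(N), D(N), PX(M), PF(M), PD(M)
                  DATA             X /0.0D0, 1.0D0, 2.0D0, 3.0D0, 4.0D0,
                 *                    5.0D0/
                  DATA             F /0.0D0, 0.5D0, 0.6D0, 1.2D0, 3.0D0,
                 *                    3.1D0/
                  DO 10 I = 1, M
                     PX(I) = 0.5D0*DBLE(I-1)
            10    CONTINUE
                  IFAIL = 0
                  CALL E01BEF (N, X, F, D, IFAIL)
                  IFAIL = 0
                  CALL E01BGF (N, X, F, D, M, PX, PF, PD, IFAIL)
                  WRITE (*,*) (PX(I), PF(I), PD(I), I = 1, M)
                  END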

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01bff}{NAG On-line Documentation: e01bff}
\beginscroll
\begin{verbatim}



     E01BFF(3NAG)      Foundation Library (12/10/92)      E01BFF(3NAG)



          E01 -- Interpolation                                       E01BFF
                  E01BFF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01BFF evaluates a piecewise cubic Hermite interpolant at a set
          of points.

          2. Specification

                 SUBROUTINE E01BFF (N, X, F, D, M, PX, PF, IFAIL)
                 INTEGER          N, M, IFAIL
                 DOUBLE PRECISION X(N), F(N), D(N), PX(M), PF(M)

          3. Description

          This routine evaluates a piecewise cubic Hermite interpolant, as
          computed by E01BEF, at the points PX(i), for i=1,2,...,m. If any
          point lies outside the interval from X(1) to X(N), a value is
          extrapolated from the nearest extreme cubic, and a warning is
          returned.

          The routine is derived from routine PCHFE in Fritsch [1].

          4. References

          [1]   Fritsch F N (1982) PCHIP Final Specifications. Report UCID-
                30194. Lawrence Livermore National Laboratory.

          5. Parameters

           1:  N -- INTEGER                                           Input

           2:  X(N) -- DOUBLE PRECISION array                         Input

           3:  F(N) -- DOUBLE PRECISION array                         Input

           4:  D(N) -- DOUBLE PRECISION array                         Input
               On entry: N, X, F and D must be unchanged from the previous
               call of E01BEF.

           5:  M -- INTEGER                                           Input
               On entry: m, the number of points at which the interpolant
               is to be evaluated. Constraint: M >= 1.

           6:  PX(M) -- DOUBLE PRECISION array                        Input
               On entry: the m values of x at which the interpolant is to
               be evaluated.

           7:  PF(M) -- DOUBLE PRECISION array                       Output
               On exit: PF(i) contains the value of the interpolant
               evaluated at the point PX(i), for i=1,2,...,m.

           8:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry N < 2.

          IFAIL= 2
               The values of X(r), for r = 1,2,...,N, are not in strictly
               increasing order.

          IFAIL= 3
               On entry M < 1.

          IFAIL= 4
               At least one of the points PX(i), for i = 1,2,...,M, lies
               outside the interval [X(1),X(N)], and extrapolation was
               performed at all such points. Values computed at such points
               may be very unreliable.

          7. Accuracy

          The computational errors in the array PF should be negligible in
          most practical situations.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          the number of evaluation points, m. The evaluation will be most
          efficient if the elements of PX are in non-decreasing order (or,
          more generally, if they are grouped in increasing order of the
          intervals [X(r-1),X(r)]). A single call of E01BFF with m>1 is
          more efficient than several calls with m=1.

          9. Example

          This example program reads in values of N, X, F and D, and then
          calls E01BFF to evaluate the interpolant at equally spaced
          points.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01bgf}{NAG On-line Documentation: e01bgf}
\beginscroll
\begin{verbatim}



     E01BGF(3NAG)      Foundation Library (12/10/92)      E01BGF(3NAG)



          E01 -- Interpolation                                       E01BGF
                  E01BGF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01BGF evaluates a piecewise cubic Hermite interpolant and its
          first derivative at a set of points.

          2. Specification

                 SUBROUTINE E01BGF (N, X, F, D, M, PX, PF, PD, IFAIL)
                 INTEGER          N, M, IFAIL
                 DOUBLE PRECISION X(N), F(N), D(N), PX(M), PF(M), PD(M)

          3. Description

          This routine evaluates a piecewise cubic Hermite interpolant, as
          computed by E01BEF, at the points PX(i), for i=1,2,...,m. The
          first derivatives at the points are also computed. If any point
          lies outside the interval from X(1) to X(N), values of the
          interpolant and its derivative are extrapolated from the nearest
          extreme cubic, and a warning is returned.

          If values of the interpolant only, and not of its derivative, are
          required, E01BFF should be used.

          The routine is derived from routine PCHFD in Fritsch [1].

          4. References

          [1]   Fritsch F N (1982) PCHIP Final Specifications. Report UCID-
                30194. Lawrence Livermore National Laboratory.

          5. Parameters

           1:  N -- INTEGER                                           Input

           2:  X(N) -- DOUBLE PRECISION array                         Input

           3:  F(N) -- DOUBLE PRECISION array                         Input

           4:  D(N) -- DOUBLE PRECISION array                         Input
               On entry: N, X, F and D must be unchanged from the previous
               call of E01BEF.

           5:  M -- INTEGER                                           Input
               On entry: m, the number of points at which the interpolant
               is to be evaluated. Constraint: M >= 1.

           6:  PX(M) -- DOUBLE PRECISION array                        Input
               On entry: the m values of x at which the interpolant is to
               be evaluated.

           7:  PF(M) -- DOUBLE PRECISION array                       Output
               On exit: PF(i) contains the value of the interpolant
               evaluated at the point PX(i), for i=1,2,...,m.

           8:  PD(M) -- DOUBLE PRECISION array                       Output
               On exit: PD(i) contains the first derivative of the
               interpolant evaluated at the point PX(i), for i=1,2,...,m.

           9:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry N < 2.

          IFAIL= 2
               The values of X(r), for r = 1,2,...,N, are not in strictly
               increasing order.

          IFAIL= 3
               On entry M < 1.

          IFAIL= 4
               At least one of the points PX(i), for i = 1,2,...,M, lies
               outside the interval [X(1),X(N)], and extrapolation was
               performed at all such points. Values computed at these
               points may be very unreliable.

          7. Accuracy

          The computational errors in the arrays PF and PD should be
          negligible in most practical situations.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          the number of evaluation points, m. The evaluation will be most
          efficient if the elements of PX are in non-decreasing order (or,
          more generally, if they are grouped in increasing order of the
          intervals [X(r-1),X(r)]). A single call of E01BGF with m>1 is
          more efficient than several calls with m=1.

          9. Example

          This example program reads in values of N, X, F and D, and calls
          E01BGF to compute the values of the interpolant and its
          derivative at equally spaced points.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01bhf}{NAG On-line Documentation: e01bhf}
\beginscroll
\begin{verbatim}



     E01BHF(3NAG)      Foundation Library (12/10/92)      E01BHF(3NAG)



          E01 -- Interpolation                                       E01BHF
                  E01BHF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01BHF evaluates the definite integral of a piecewise cubic
          Hermite interpolant over the interval [a,b].

          2. Specification

                 SUBROUTINE E01BHF (N, X, F, D, A, B, PINT, IFAIL)
                 INTEGER          N, IFAIL
                 DOUBLE PRECISION X(N), F(N), D(N), A, B, PINT

          3. Description

          This routine evaluates the definite integral of a piecewise cubic
          Hermite interpolant, as computed by E01BEF, over the interval
          [a,b].

          If either a or b lies outside the interval from X(1) to X(N),
          computation of the integral involves extrapolation and a warning
          is returned.

          The routine is derived from routine PCHIA in Fritsch [1].

          4. References

          [1]   Fritsch F N (1982) PCHIP Final Specifications. Report UCID-
                30194. Lawrence Livermore National Laboratory.

          5. Parameters

           1:  N -- INTEGER                                           Input

           2:  X(N) -- DOUBLE PRECISION array                         Input

           3:  F(N) -- DOUBLE PRECISION array                         Input

           4:  D(N) -- DOUBLE PRECISION array                         Input
               On entry: N, X, F and D must be unchanged from the previous
               call of E01BEF.

           5:  A -- DOUBLE PRECISION                                  Input

           6:  B -- DOUBLE PRECISION                                  Input
               On entry: the interval [a,b] over which integration is to
               be performed.

           7:  PINT -- DOUBLE PRECISION                              Output
               On exit: the value of the definite integral of the
               interpolant over the interval [a,b].

           8:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry N < 2.

          IFAIL= 2
               The values of X(r), for r = 1,2,...,N, are not in strictly
               increasing order.

          IFAIL= 3
               On entry at least one of A or B lies outside the interval
               [X(1),X(N)], and extrapolation was performed to compute the
               integral. The value returned is therefore unreliable.

          7. Accuracy

          The computational error in the value returned for PINT should be
          negligible in most practical situations.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          the number of data points included within the interval [a,b].

          9. Example

          This example program reads in values of N, X, F and D. It then
          reads in pairs of values for A and B, and evaluates the definite
          integral of the interpolant over the interval [A,B] until end-of-
          file is reached.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01daf}{NAG On-line Documentation: e01daf}
\beginscroll
\begin{verbatim}



     E01DAF(3NAG)      Foundation Library (12/10/92)      E01DAF(3NAG)



          E01 -- Interpolation                                       E01DAF
                  E01DAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01DAF computes a bicubic spline interpolating surface through a
          set of data values, given on a rectangular grid in the x-y plane.

          2. Specification

                 SUBROUTINE E01DAF (MX, MY, X, Y, F, PX, PY, LAMDA, MU, C,
                1                   WRK, IFAIL)
                 INTEGER          MX, MY, PX, PY, IFAIL
                  DOUBLE PRECISION X(MX), Y(MY), F(MX*MY), LAMDA(MX+4),
                 1                 MU(MY+4), C(MX*MY), WRK((MX+6)*(MY+6))

          3. Description

          This routine determines a bicubic spline interpolant to the set
          of data points (x_q, y_r, f_(q,r)), for q=1,2,...,m_x;
          r=1,2,...,m_y. The
          spline is given in the B-spline representation

             s(x,y) = sum(i=1,...,m_x) sum(j=1,...,m_y) c_ij M_i(x) N_j(y),

          such that

                                s(x_q, y_r) = f_(q,r),

          where M_i(x) and N_j(y) denote normalised cubic B-splines, the
          former defined on the knots lambda_i to lambda_(i+4) and the
          latter on the knots mu_j to mu_(j+4), and the c_ij are the spline
          coefficients. These knots, as well as the coefficients, are
          determined by the routine, which is derived from the routine
          B2IRE in Anthony et al. [1]. The method used is described in
          Section 8.2.

          For further information on splines, see Hayes and Halliday [4]
          for bicubic splines and de Boor [3] for normalised B-splines.

          Values of the computed spline can subsequently be obtained by
          calling E02DEF or E02DFF as described in Section 8.3.

          4. References

          [1]   Anthony G T, Cox M G and Hayes J G (1982) DASL - Data
                Approximation Subroutine Library. National Physical
                Laboratory.

          [2]   Cox M G (1975) An Algorithm for Spline Interpolation. J.
                Inst. Math. Appl. 15 95--108.

          [3]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          [4]   Hayes J G and Halliday J (1974) The Least-squares Fitting of
                Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
                Appl. 14 89--103.

          5. Parameters

           1:  MX -- INTEGER                                          Input

           2:  MY -- INTEGER                                          Input
               On entry: MX and MY must specify m_x and m_y respectively,
               the number of points along the x and y axes that define the
               rectangular grid. Constraint: MX >= 4 and MY >= 4.

           3:  X(MX) -- DOUBLE PRECISION array                        Input

           4:  Y(MY) -- DOUBLE PRECISION array                        Input
               On entry: X(q) and Y(r) must contain x_q, for
               q=1,2,...,m_x, and y_r, for r=1,2,...,m_y, respectively.
               Constraints:
                    X(q) < X(q+1), for q=1,2,...,m_x - 1,

                    Y(r) < Y(r+1), for r=1,2,...,m_y - 1.

           5:  F(MX*MY) -- DOUBLE PRECISION array                     Input
               On entry: F(m_y*(q-1)+r) must contain f_(q,r), for
               q=1,2,...,m_x; r=1,2,...,m_y.

           6:  PX -- INTEGER                                         Output

           7:  PY -- INTEGER                                         Output
               On exit: PX and PY contain m_x+4 and m_y+4, the total number
               of knots of the computed spline with respect to the x and y
               variables, respectively.

           8:  LAMDA(MX+4) -- DOUBLE PRECISION array                 Output

           9:  MU(MY+4) -- DOUBLE PRECISION array                    Output
               On exit: LAMDA contains the complete set of knots lambda_i
               associated with the x variable, i.e., the interior knots
               LAMDA(5), LAMDA(6), ..., LAMDA(PX-4), as well as the
               additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA(4) =
               X(1) and LAMDA(PX-3) = LAMDA(PX-2) = LAMDA(PX-1) = LAMDA(PX)
               = X(MX) needed for the B-spline representation. MU contains
               the corresponding complete set of knots mu_i associated
               with the y variable.

          10:  C(MX*MY) -- DOUBLE PRECISION array                    Output
               On exit: the coefficients of the spline interpolant.
               C(m_y*(i-1)+j) contains the coefficient c_ij described in
               Section 3.

          11:  WRK((MX+6)*(MY+6)) -- DOUBLE PRECISION array       Workspace

          12:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry MX < 4,

               or       MY < 4.

          IFAIL= 2
               On entry either the values in the X array or the values in
               the Y array are not in increasing order.

          IFAIL= 3
               A system of linear equations defining the B-spline
               coefficients was singular; the problem is too ill-
               conditioned to permit solution.

          7. Accuracy

          The main sources of rounding errors are in steps (2), (3), (6)
          and (7) of the algorithm described in Section 8.2. It can be
          shown (Cox [2]) that the matrix A_x formed in step (2) has
          elements differing relatively from their true values by at most
          a small multiple of 3*epsilon, where epsilon is the machine
          precision. A_x is 'totally positive', and a linear system with
          such a coefficient matrix can be solved quite safely by
          elimination without pivoting. Similar comments apply to steps
          (6) and (7). Thus the complete process is numerically stable.

          8. Further Comments

          8.1. Timing

          The time taken by this routine is approximately proportional to
          m_x*m_y.

          8.2. Outline of method used

          The process of computing the spline consists of the following
          steps:

          (1)   choice of the interior x-knots lambda_5, lambda_6, ...,
                lambda_(m_x) as lambda_i = x_(i-2), for i=5,6,...,m_x,

          (2)   formation of the system

                                     A_x E = F,

                where A_x is a band matrix of order m_x and bandwidth 4,
                containing in its qth row the values at x_q of the
                B-splines in x, F is the m_x by m_y rectangular matrix of
                values f_(q,r), and E denotes an m_x by m_y rectangular
                matrix of intermediate coefficients,

          (3)   use of Gaussian elimination to reduce this system to band
                triangular form,

          (4)   solution of this triangular system for E,

          (5)   choice of the interior y-knots mu_5, mu_6, ..., mu_(m_y)
                as mu_i = y_(i-2), for i=5,6,...,m_y,

          (6)   formation of the system

                                    A_y C^T = E^T,

                where A_y is the counterpart of A_x for the y variable,
                and C denotes the m_x by m_y rectangular matrix of values
                of c_ij,

          (7)   use of Gaussian elimination to reduce this system to band
                triangular form,

          (8)   solution of this triangular system for C^T and hence C.

          For computational convenience, steps (2) and (3), and likewise
          steps (6) and (7), are combined so that the formation of A_x and
          A_y and the reductions to triangular form are carried out one
          row at a time.

          8.3. Evaluation of Computed Spline

          The values of the computed spline at the points (TX(r),TY(r)),
          for r = 1,2,...,N, may be obtained in the double precision array
          FF, of length at least N, by the following call:


                IFAIL = 0
                CALL E02DEF(N,PX,PY,TX,TY,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)

          where PX, PY, LAMDA, MU and C are the output parameters of E01DAF,
          WRK is a double precision workspace array of length at least
          PY-4, and IWRK is an integer workspace array of length at least
          PY-4.

          To evaluate the computed spline on an NX by NY rectangular grid
          of points in the x-y plane, which is defined by the x co-
          ordinates stored in TX(q), for q = 1,2,...,NX, and the y co-
          ordinates stored in TY(r), for r = 1,2,...,NY, returning the
          results in the double precision array FG which is of length at
          least NX*NY, the following call may be used:


                 IFAIL = 0
                 CALL E02DFF(NX,NY,PX,PY,TX,TY,LAMDA,MU,C,FG,WRK,LWRK,
                *            IWRK,LIWRK,IFAIL)

          where PX, PY, LAMDA, MU and C are the output parameters of E01DAF,
          WRK is a double precision workspace array of length at least
          LWRK = min(NWRK1,NWRK2), NWRK1 = NX*4+PX, NWRK2 = NY*4+PY, and
          IWRK is an integer workspace array of length at least LIWRK = NY
          + PY - 4 if NWRK1 > NWRK2, or NX + PX - 4 otherwise. The result
          of the spline evaluated at grid point (q,r) is returned in
          element (NY*(q-1)+r) of the array FG.

          9. Example

          This program reads in values of m_x, x_q for q=1,2,...,m_x, m_y
          and y_r for r=1,2,...,m_y, followed by values of the ordinates
          f_(q,r) defined at the grid points (x_q, y_r). It then calls
          E01DAF to
          compute a bicubic spline interpolant of the data values, and
          prints the values of the knots and B-spline coefficients. Finally
          it evaluates the spline at a small sample of points on a
          rectangular grid.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
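
          For illustration only, a minimal program of this kind (not the
          distributed example program) might be written as follows, using
          the evaluation call described in Section 8.3:

                  PROGRAM E01DAX
          C       Illustrative sketch: interpolate values of f(x,y) = x*y
          C       given on a 4 by 5 rectangular grid with E01DAF, then
          C       evaluate the spline at a single point with E02DEF.
                  INTEGER          MX, MY, NEV
                  PARAMETER        (MX=4, MY=5, NEV=1)
                  INTEGER          Q, R, PX, PY, IWRK(MY+4), IFAIL
                  DOUBLE PRECISION X(MX), Y(MY), F(MX*MY), LAMDA(MX+4),
                 *                 MU(MY+4), C(MX*MY),
                 *                 WRK((MX+6)*(MY+6)), TX(NEV), TY(NEV),
                 *                 FF(NEV)
                  DO 20 Q = 1, MX
                     X(Q) = DBLE(Q)
                     DO 10 R = 1, MY
                        Y(R) = DBLE(R)
          C             Grid values are stored as F(MY*(Q-1)+R).
                        F(MY*(Q-1)+R) = X(Q)*Y(R)
            10       CONTINUE
            20    CONTINUE
                  IFAIL = 0
                  CALL E01DAF (MX, MY, X, Y, F, PX, PY, LAMDA, MU, C,
                 *             WRK, IFAIL)
                  TX(1) = 2.5D0
                  TY(1) = 3.5D0
                  IFAIL = 0
          C       The E01DAF workspace is reused here; E02DEF needs a
          C       workspace of length at least PY-4.
                  CALL E02DEF (NEV, PX, PY, TX, TY, LAMDA, MU, C, FF,
                 *             WRK, IWRK, IFAIL)
                  WRITE (*,*) 's(2.5,3.5) =', FF(1)
                  END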

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01saf}{NAG On-line Documentation: e01saf}
\beginscroll
\begin{verbatim}



     E01SAF(3NAG)      Foundation Library (12/10/92)      E01SAF(3NAG)



          E01 -- Interpolation                                       E01SAF
                  E01SAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01SAF generates a two-dimensional surface interpolating a set of
          scattered data points, using the method of Renka and Cline.

          2. Specification

                 SUBROUTINE E01SAF (M, X, Y, F, TRIANG, GRADS, IFAIL)
                 INTEGER          M, TRIANG(7*M), IFAIL
                 DOUBLE PRECISION X(M), Y(M), F(M), GRADS(2,M)

          3. Description

          This routine constructs an interpolating surface F(x,y) through a
          set of m scattered data points (x ,y ,f ), for r=1,2,...,m, using
                                           r  r  r
          a method due to Renka and Cline. In the (x,y) plane, the data
          points must be distinct. The constructed surface is continuous
          and has continuous first derivatives.

          The method involves firstly creating a triangulation with all the
          (x,y) data points as nodes, the triangulation being as nearly
          equiangular as possible (see Cline and Renka [1]). Then gradients
          in the x- and y-directions are estimated at node r, for
          r=1,2,...,m, as the partial derivatives of a quadratic function
          of x and y which interpolates the data value f , and which fits
                                                        r
          the data values at nearby nodes (those within a certain distance
          chosen by the algorithm) in a weighted least-squares sense. The
          weights are chosen such that closer nodes have more influence
          than more distant nodes on derivative estimates at node r. The
          computed partial derivatives, with the f  values, at the three
                                                  r
          nodes of each triangle define a piecewise polynomial surface of a
          certain form which is the interpolant on that triangle. See Renka
          and Cline [4] for more detailed information on the algorithm, a
          development of that by Lawson [2]. The code is derived from Renka
          [3].

          The interpolant F(x,y) can subsequently be evaluated at any point
          (x,y) inside or outside the domain of the data by a call to
          E01SBF. Points outside the domain are evaluated by extrapolation.
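
          For illustration only (this fragment is not part of the routine
          document), a call of E01SAF might take the following form. The
          value of MMAX, the input unit NIN and the data-reading
          statements are assumptions made for the purpose of the sketch:


                 INTEGER          MMAX
                 PARAMETER        (MMAX=100)
                 INTEGER          M, R, TRIANG(7*MMAX), IFAIL
                 DOUBLE PRECISION X(MMAX), Y(MMAX), F(MMAX),
                *                 GRADS(2,MMAX)
                 READ (NIN,*) M
                 READ (NIN,*) (X(R), Y(R), F(R), R=1,M)
                 IFAIL = 0
                 CALL E01SAF(M,X,Y,F,TRIANG,GRADS,IFAIL)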

          4. References

          [1]   Cline A K and Renka R L (1984) A Storage-efficient Method
                for Construction of a Thiessen Triangulation. Rocky Mountain
                J. Math. 14 119--139.

                                                1
          [2]   Lawson C L (1977) Software for C  Surface Interpolation.
                Mathematical Software III. (ed J R Rice) Academic Press.
                161--194.

          [3]   Renka R L (1984) Algorithm 624: Triangulation and
                Interpolation of Arbitrarily Distributed Points in the
                Plane. ACM Trans. Math. Softw. 10 440--442.

                                                                 1
          [4]   Renka R L and Cline A K (1984) A Triangle-based C
                Interpolation Method. Rocky Mountain J. Math. 14 223--237.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: m, the number of data points. Constraint: M >= 3.

           2:  X(M) -- DOUBLE PRECISION array                         Input

           3:  Y(M) -- DOUBLE PRECISION array                         Input

           4:  F(M) -- DOUBLE PRECISION array                         Input
               On entry: the co-ordinates of the rth data point, for
               r=1,2,...,m. The data points are accepted in any order, but
               see Section 8. Constraint: The (x,y) nodes must not all be
               collinear, and each node must be unique.

           5:  TRIANG(7*M) -- INTEGER array                          Output
               On exit: a data structure defining the computed
               triangulation, in a form suitable for passing to E01SBF.

           6:  GRADS(2,M) -- DOUBLE PRECISION array                  Output
               On exit: the estimated partial derivatives at the nodes, in
               a form suitable for passing to E01SBF. The derivatives at
               node r with respect to x and y are contained in GRADS(1,r)
               and GRADS(2,r) respectively, for r=1,2,...,m.

           7:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry M < 3.

          IFAIL= 2
               On entry all the (X,Y) pairs are collinear.

          IFAIL= 3
               On entry (X(i),Y(i)) = (X(j),Y(j)) for some i/=j.

          7. Accuracy

          On successful exit, the computational errors should be negligible
          in most situations but the user should always check the computed
          surface for acceptability, by drawing contours for instance. The
          surface always interpolates the input data exactly.

          8. Further Comments

          The time taken for a call of E01SAF is approximately proportional
          to the number of data points, m. The routine is more efficient
          if, before entry, the values in X, Y, F are arranged so that the
          X array is in ascending order.

          9. Example

          This program reads in a set of 30 data points and calls E01SAF to
          construct an interpolating surface. It then calls E01SBF to
          evaluate the interpolant at a sample of points on a rectangular
          grid.

          Note that this example is not typical of a realistic problem: the
          number of data points would normally be larger, and the
          interpolant would need to be evaluated on a finer grid to obtain
          an accurate plot, say.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01sbf}{NAG On-line Documentation: e01sbf}
\beginscroll
\begin{verbatim}



     E01SBF(3NAG)      Foundation Library (12/10/92)      E01SBF(3NAG)



          E01 -- Interpolation                                       E01SBF
                  E01SBF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01SBF evaluates at a given point the two-dimensional interpolant
          function computed by E01SAF.

          2. Specification

                 SUBROUTINE E01SBF (M, X, Y, F, TRIANG, GRADS, PX, PY, PF,
                1                   IFAIL)
                 INTEGER          M, TRIANG(7*M), IFAIL
                 DOUBLE PRECISION X(M), Y(M), F(M), GRADS(2,M), PX, PY, PF

          3. Description

          This routine takes as input the parameters defining the
          interpolant F(x,y) of a set of scattered data points (x ,y ,f ),
                                                                 r  r  r
          for r=1,2,...,m, as computed by E01SAF, and evaluates the
          interpolant at the point (px,py).

          If (px,py) is equal to (x ,y ) for some value of r, the returned
                                   r  r
          value will be equal to f .
                                  r

          If (px,py) is not equal to (x ,y ) for any r, the derivatives in
                                       r  r
          GRADS will be used to compute the interpolant. A triangle is
          sought which contains the point (px,py), and the vertices of the
          triangle along with the partial derivatives and f  values at the
                                                           r
          vertices are used to compute the value F(px,py). If the point
          (px,py) lies outside the triangulation defined by the input
          parameters, the returned value is obtained by extrapolation. In
          this case, the interpolating function F is extended linearly
          beyond the triangulation boundary. The method is described in
          more detail in Renka and Cline [2] and the code is derived from
          Renka [1].

          E01SBF must only be called after a call to E01SAF.
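
          As an illustration only (not part of the routine document), the
          interpolant might be evaluated at a set of NEVAL points whose
          co-ordinates are held in assumed arrays XE and YE, following a
          call of E01SAF; I is assumed INTEGER and NOUT is an assumed
          output unit:


                 DO 20 I = 1, NEVAL
                    IFAIL = 0
                    CALL E01SBF(M,X,Y,F,TRIANG,GRADS,XE(I),YE(I),PF,
                *               IFAIL)
                    WRITE (NOUT,*) XE(I), YE(I), PF
             20  CONTINUE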

          4. References

          [1]   Renka R L (1984) Algorithm 624: Triangulation and
                Interpolation of Arbitrarily Distributed Points in the
                Plane. ACM Trans. Math. Softw. 10 440--442.

                                                                 1
          [2]   Renka R L and Cline A K (1984) A Triangle-based C
                Interpolation Method. Rocky Mountain J. Math. 14 223--237.

          5. Parameters

           1:  M -- INTEGER                                           Input

           2:  X(M) -- DOUBLE PRECISION array                         Input

           3:  Y(M) -- DOUBLE PRECISION array                         Input

           4:  F(M) -- DOUBLE PRECISION array                         Input

           5:  TRIANG(7*M) -- INTEGER array                           Input

           6:  GRADS(2,M) -- DOUBLE PRECISION array                   Input
               On entry: M, X, Y, F, TRIANG and GRADS must be unchanged
               from the previous call of E01SAF.

           7:  PX -- DOUBLE PRECISION                                 Input

           8:  PY -- DOUBLE PRECISION                                 Input
               On entry: the point (px,py) at which the interpolant is to
               be evaluated.

           9:  PF -- DOUBLE PRECISION                                Output
               On exit: the value of the interpolant evaluated at the
               point (px,py).

          10:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry M < 3.

          IFAIL= 2
               On entry the triangulation information held in the array
               TRIANG does not specify a valid triangulation of the data
               points. TRIANG may have been corrupted since the call to
               E01SAF.

          IFAIL= 3
               The evaluation point (PX,PY) lies outside the nodal
               triangulation, and the value returned in PF is computed by
               extrapolation.

          7. Accuracy

          Computational errors should be negligible in most practical
          situations.

          8. Further Comments

          The time taken for a call of E01SBF is approximately proportional
          to the number of data points, m.

          The results returned by this routine are particularly suitable
          for applications such as graph plotting, producing a smooth
          surface from a number of scattered points.

          9. Example

          See the example for E01SAF.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01sef}{NAG On-line Documentation: e01sef}
\beginscroll
\begin{verbatim}



     E01SEF(3NAG)      Foundation Library (12/10/92)      E01SEF(3NAG)



          E01 -- Interpolation                                       E01SEF
                  E01SEF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01SEF generates a two-dimensional surface interpolating a set of
          scattered data points, using a modified Shepard method.

          2. Specification

                 SUBROUTINE E01SEF (M, X, Y, F, RNW, RNQ, NW, NQ, FNODES,
                1                   MINNQ, WRK, IFAIL)
                 INTEGER          M, NW, NQ, MINNQ, IFAIL
                 DOUBLE PRECISION X(M), Y(M), F(M), RNW, RNQ, FNODES(5*M),
                1                 WRK(6*M)

          3. Description

          This routine constructs an interpolating surface F(x,y) through a
          set of m scattered data points (x ,y ,f ), for r=1,2,...,m, using
                                           r  r  r
          a modification of Shepard's method. The surface is continuous and
          has continuous first derivatives.

          The basic Shepard method, described in [2], interpolates the
          input data with the weighted mean

                          sum(r=1,...,m) w_r(x,y) f_r
                 F(x,y) = ----------------------------,
                           sum(r=1,...,m) w_r(x,y)

          where w_r(x,y) = 1/d_r^2  and  d_r^2 = (x-x_r)^2 + (y-y_r)^2.
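
          Purely as an illustration of the basic Shepard formula above,
          and not of the modified (local) method actually used by E01SEF,
          a direct evaluation of the weighted mean might be coded as
          follows; the function name SHEP is an invention of this sketch:


                 DOUBLE PRECISION FUNCTION SHEP(M,X,Y,F,PX,PY)
                 INTEGER          M, R
                 DOUBLE PRECISION X(M), Y(M), F(M), PX, PY
                 DOUBLE PRECISION D2, W, SNUM, SDEN
                 SNUM = 0.0D0
                 SDEN = 0.0D0
                 DO 20 R = 1, M
                    D2 = (PX-X(R))**2 + (PY-Y(R))**2
                    IF (D2.EQ.0.0D0) THEN
                       SHEP = F(R)
                       RETURN
                    END IF
                    W = 1.0D0/D2
                    SNUM = SNUM + W*F(R)
                    SDEN = SDEN + W
             20  CONTINUE
                 SHEP = SNUM/SDEN
                 RETURN
                 END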

          The basic method is global in that the interpolated value at any
          point depends on all the data, but this routine uses a
          modification due to Franke and Nielson described in [1], whereby
          the method becomes local by adjusting each w_r(x,y) to be zero
          outside a circle with centre (x_r,y_r) and some radius R_w.
          Also, to improve the performance of the basic method, each f_r
          above is replaced by a function f_r(x,y), which is a quadratic
          fitted by weighted least-squares to data local to (x_r,y_r) and
          forced to interpolate (x_r,y_r,f_r). In this context, a point
          (x,y) is defined to be local to another point if it lies within
          some distance R_q of it. Computation of these quadratics
          constitutes the main work done by this routine. If there are
          fewer than 5 other points within distance R_q of (x_r,y_r), the
          quadratic is replaced by a linear function. In cases of rank-
          deficiency, the minimum norm solution is computed.

          The user may specify values for R_w and R_q, but it is usually
          easier to choose instead two integers N_w and N_q, from which
          the routine will compute R_w and R_q. These integers can be
          thought of as the average numbers of data points lying within
          distances R_w and R_q respectively from each node. Default
          values are provided, and advice on alternatives is given in
          Section 8.2.

          The interpolant F(x,y) generated by this routine can subsequently
          be evaluated for any point (x,y) in the domain of the data by a
          call to E01SFF.
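
          For illustration only (not part of the routine document), a call
          using the default choices of the radii (obtained by setting RNQ
          and NQ non-positive) might take the following form; it is
          assumed that M, X, Y and F have been set and that FNODES and WRK
          have been declared with the dimensions given in Section 2:


                 RNW = 0.0D0
                 RNQ = 0.0D0
                 NW = 0
                 NQ = 0
                 IFAIL = 0
                 CALL E01SEF(M,X,Y,F,RNW,RNQ,NW,NQ,FNODES,MINNQ,WRK,
                *            IFAIL)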

          4. References

          [1]   Franke R and Nielson G (1980) Smooth Interpolation of Large
                Sets of Scattered Data. Internat. J. Num. Methods Engrg. 15
                1691--1704.

          [2]   Shepard D (1968) A Two-dimensional Interpolation Function
                for Irregularly Spaced Data. Proc. 23rd Nat. Conf. ACM.
                Brandon/Systems Press Inc., Princeton. 517--523.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: m, the number of data points. Constraint: M >= 3.

           2:  X(M) -- DOUBLE PRECISION array                         Input

           3:  Y(M) -- DOUBLE PRECISION array                         Input

           4:  F(M) -- DOUBLE PRECISION array                         Input
               On entry: the co-ordinates of the rth data point, for
               r=1,2,...,m. The order of the data points is immaterial.
               Constraint: each of the (X(r),Y(r)) pairs must be unique.

           5:  RNW -- DOUBLE PRECISION                         Input/Output

           6:  RNQ -- DOUBLE PRECISION                         Input/Output
               On entry: suitable values for the radii R  and R ,
                                                         w      q
               described in Section 3. Constraint: RNQ <= 0 or 0 < RNW <=
               RNQ. On exit: if RNQ is set less than or equal to zero on
               entry, then default values for both of them will be computed
               from the parameters NW and NQ, and RNW and RNQ will contain
               these values on exit.

           7:  NW -- INTEGER                                          Input

           8:  NQ -- INTEGER                                          Input
               On entry: if RNQ > 0.0 and RNW > 0.0 then NW and NQ are not
               referenced by the routine. Otherwise, NW and NQ must specify
               suitable values for the integers N  and N  described in
                                                 w      q
               Section 3.

               If NQ is less than or equal to zero on entry, then default
               values for both of them, namely NW = 9 and NQ = 18, will be
               used. Constraint: NQ <= 0 or 0 < NW <= NQ.

           9:  FNODES(5*M) -- DOUBLE PRECISION array                 Output
               On exit: the coefficients of the constructed quadratic
               nodal functions. These are in a form suitable for passing to
               E01SFF.

          10:  MINNQ -- INTEGER                                      Output
               On exit: the minimum number of data points that lie within
               radius RNQ of any node, and thus define a nodal function. If
               MINNQ is very small (say, less than 5), then the interpolant
               may be unsatisfactory in regions where the data points are
               sparse.

          11:  WRK(6*M) -- DOUBLE PRECISION array                 Workspace

          12:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry M < 3.

          IFAIL= 2
               On entry RNQ > 0 and either RNW > RNQ or RNW <= 0.

          IFAIL= 3
               On entry NQ > 0 and either NW > NQ or NW <= 0.

          IFAIL= 4
               On entry (X(i),Y(i)) is equal to (X(j),Y(j)) for some i/=j.

          7. Accuracy

          On successful exit, the computational errors should be negligible
          in most situations but the user should always check the computed
          surface for acceptability, by drawing contours for instance. The
          surface always interpolates the input data exactly.

          8. Further Comments

          8.1. Timing

          The time taken for a call of E01SEF is approximately proportional
          to the number of data points, m, provided that N_q is of the
          same order as its default value (18). However if N_q is
          increased so that the method becomes more global, the time taken
          becomes approximately proportional to m^2.

          8.2. Choice of N_w and N_q

          Note first that the radii R_w and R_q, described in Section 3,
          are computed as

               R_w = (D/2)*sqrt(N_w/m)  and  R_q = (D/2)*sqrt(N_q/m)

          respectively, where D is the maximum distance between any pair
          of data points.
          Default values N_w = 9 and N_q = 18 work quite well when the
          data points are fairly uniformly distributed. However, for data
          having some regions with relatively few points or for small data
          sets (m < 25), a larger value of N_w may be needed. This is to
          ensure a reasonable number of data points within a distance R_w
          of each node, and to avoid some regions in the data area being
          left outside all the discs of radius R_w on which the weights
          w_r(x,y) are non-zero. Maintaining N_q approximately equal to
          2N_w is usually an advantage.
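
          As an illustrative check only (E01SEF performs this computation
          itself), the default radii corresponding to given values of N_w
          and N_q could be reproduced as follows; I and J are assumed
          INTEGER, and D, RNW and RNQ are assumed DOUBLE PRECISION:


                 D = 0.0D0
                 DO 40 I = 1, M - 1
                    DO 20 J = I + 1, M
                       D = MAX(D,SQRT((X(I)-X(J))**2+(Y(I)-Y(J))**2))
             20     CONTINUE
             40  CONTINUE
                 RNW = 0.5D0*D*SQRT(DBLE(NW)/DBLE(M))
                 RNQ = 0.5D0*D*SQRT(DBLE(NQ)/DBLE(M))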

          Note however that increasing N_w and N_q does not improve the
          quality of the interpolant in all cases. It does increase the
          computational cost and makes the method less local.

          9. Example

          This program reads in a set of 30 data points and calls E01SEF to
          construct an interpolating surface. It then calls E01SFF to
          evaluate the interpolant at a sample of points on a rectangular
          grid.

          Note that this example is not typical of a realistic problem: the
          number of data points would normally be larger, and the
          interpolant would need to be evaluated on a finer grid to obtain
          an accurate plot, say.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe01sff}{NAG On-line Documentation: e01sff}
\beginscroll
\begin{verbatim}



     E01SFF(3NAG)      Foundation Library (12/10/92)      E01SFF(3NAG)



          E01 -- Interpolation                                       E01SFF
                  E01SFF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E01SFF evaluates at a given point the two-dimensional
          interpolating function computed by E01SEF.

          2. Specification

                 SUBROUTINE E01SFF (M, X, Y, F, RNW, FNODES, PX, PY, PF,
                1                   IFAIL)
                 INTEGER          M, IFAIL
                 DOUBLE PRECISION X(M), Y(M), F(M), RNW, FNODES(5*M), PX,
                1                 PY, PF

          3. Description

          This routine takes as input the interpolant F(x,y) of a set of
          scattered data points (x_r,y_r,f_r), for r=1,2,...,m, as
          computed by E01SEF, and evaluates the interpolant at the point
          (px,py).

          If (px,py) is equal to (x_r,y_r) for some value of r, the
          returned value will be equal to f_r.

          If (px,py) is not equal to (x_r,y_r) for any r, all points that are
          within distance RNW of (px,py), along with the corresponding
          nodal functions given by FNODES, will be used to compute a value
          of the interpolant.

          E01SFF must only be called after a call to E01SEF.
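
          As an illustration only (not part of the routine document), and
          assuming that E01SEF has been called as described above, the
          interpolant might be evaluated at a single point; the
          co-ordinates shown and the output unit NOUT are assumptions of
          the sketch:


                 PX = 0.5D0
                 PY = 0.5D0
                 IFAIL = 0
                 CALL E01SFF(M,X,Y,F,RNW,FNODES,PX,PY,PF,IFAIL)
                 WRITE (NOUT,*) PX, PY, PF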

          4. References

          [1]   Franke R and Nielson G (1980) Smooth Interpolation of Large
                Sets of Scattered Data. Internat. J. Num. Methods Engrg. 15
                1691--1704.

          [2]   Shepard D (1968) A Two-dimensional Interpolation Function
                for Irregularly Spaced Data. Proc. 23rd Nat. Conf. ACM.
                Brandon/Systems Press Inc., Princeton. 517--523.

          5. Parameters

           1:  M -- INTEGER                                           Input

           2:  X(M) -- DOUBLE PRECISION array                         Input

           3:  Y(M) -- DOUBLE PRECISION array                         Input

           4:  F(M) -- DOUBLE PRECISION array                         Input

           5:  RNW -- DOUBLE PRECISION                                Input

           6:  FNODES(5*M) -- DOUBLE PRECISION array                  Input
               On entry: M, X, Y, F, RNW and FNODES must be unchanged from
               the previous call of E01SEF.

           7:  PX -- DOUBLE PRECISION                                 Input

           8:  PY -- DOUBLE PRECISION                                 Input
               On entry: the point (px,py) at which the interpolant is to
               be evaluated.

           9:  PF -- DOUBLE PRECISION                                Output
               On exit: the value of the interpolant evaluated at the
               point (px,py).

          10:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry M < 3.

          IFAIL= 2
               The interpolant cannot be evaluated because the evaluation
               point (PX,PY) lies outside the support region of the data
               supplied in X, Y and F. This error exit will occur if
               (PX,PY) lies at a distance greater than or equal to RNW from
               every point given by arrays X and Y.

               The value 0.0 is returned in PF. This value will not provide
               continuity with values obtained at other points (PX,PY),
               i.e., values obtained when IFAIL = 0 on exit.

          7. Accuracy

          Computational errors should be negligible in most practical
          situations.

          8. Further Comments

          The time taken for a call of E01SFF is approximately proportional
          to the number of data points, m.

          The results returned by this routine are particularly suitable
          for applications such as graph plotting, producing a smooth
          surface from a number of scattered points.

          9. Example

          See the example for E01SEF.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02}{NAG On-line Documentation: e02}
\beginscroll
\begin{verbatim}



     E02(3NAG)         Foundation Library (12/10/92)         E02(3NAG)



          E02 -- Curve and Surface Fitting              Introduction -- E02
                                    Chapter E02
                             Curve and Surface Fitting

          Contents of this Introduction:

          1.     Scope of the Chapter

          2.     Background to the Problems

          2.1.   Preliminary Considerations

          2.1.1. Fitting criteria: norms

          2.1.2. Weighting of data points

          2.2.   Curve Fitting

          2.2.1. Representation of polynomials

          2.2.2. Representation of cubic splines

          2.3.   Surface Fitting

          2.3.1. Bicubic splines: definition and representation

          2.4.   General Linear and Nonlinear Fitting Functions

          2.5.   Constrained Problems

          2.6.   References

          3.     Recommendations on Choice and Use of Routines

          3.1.   General

          3.1.1. Data considerations

          3.1.2. Transformation of variables

          3.2.   Polynomial Curves

          3.2.1. Least-squares polynomials: arbitrary data points

          3.2.2. Least-squares polynomials: selected data points

          3.3.   Cubic Spline Curves

          3.3.1. Least-squares cubic splines

          3.3.2. Automatic fitting with cubic splines

          3.4.   Spline Surfaces

          3.4.1. Least-squares bicubic splines

          3.4.2. Automatic fitting with bicubic splines

          3.5.   General Linear and Nonlinear Fitting Functions

          3.5.1. General linear functions

          3.5.2. Nonlinear functions

          3.6.   Constraints

          3.7.   Evaluation, Differentiation and Integration

          3.8.   Index



          1. Scope of the Chapter

          The main aim of this chapter is to assist the user in finding a
          function which approximates a set of data points. Typically the
          data contain random errors, as of experimental measurement, which
          need to be smoothed out. To seek an approximation to the data, it
          is first necessary to specify for the approximating function a
          mathematical form (a polynomial, for example) which contains a
          number of unspecified coefficients: the appropriate fitting
          routine then derives for the coefficients the values which
          provide the best fit of that particular form. The chapter deals
          mainly with curve and surface fitting (i.e., fitting with
          functions of one and of two variables) when a polynomial or a
          cubic spline is used as the fitting function, since these cover
          the most common needs. However, fitting with other functions
          and/or more variables can be undertaken by means of general
          linear or nonlinear routines (some of which are contained in
          other chapters) depending on whether the coefficients in the
          function occur linearly or nonlinearly. Cases where a graph
          rather than a set of data points is given can be treated simply
          by first reading a suitable set of points from the graph.

          The chapter also contains routines for evaluating,
          differentiating and integrating polynomial and spline curves and
          surfaces, once the numerical values of their coefficients have
          been determined.

          2. Background to the Problems

          2.1. Preliminary Considerations

          In the curve-fitting problems considered in this chapter, we have
          a dependent variable y and an independent variable x, and we are
          given a set of data points (x ,y ), for r=1,2,...,m. The
                                       r  r
          preliminary matters to be considered in this section will, for
          simplicity, be discussed in this context of curve-fitting
          problems. In fact, however, these considerations apply equally
          well to surface and higher-dimensional problems. Indeed, the
          discussion presented carries over essentially as it stands if,
          for these cases, we interpret x as a vector of several
          independent variables and correspondingly each x  as a vector
                                                          r
          containing the rth data value of each independent variable.

          We wish, then, to approximate the set of data points as closely
          as possible with a specified function, f(x) say, which is as
          smooth as possible -- f(x) may, for example, be a polynomial. The
          requirements of smoothness and closeness conflict, however, and a
          balance has to be struck between them. Most often, the smoothness
          requirement is met simply by limiting the number of coefficients
          allowed in the fitting function -- for example, by restricting
          the degree in the case of a polynomial. Given a particular number
          of coefficients in the function in question, the fitting routines
          of this chapter determine the values of the coefficients such
          that the 'distance' of the function from the data points is as
          small as possible. The necessary balance is struck by the user
          comparing a selection of such fits having different numbers of
          coefficients. If the number of coefficients is too low, the
          approximation to the data will be poor. If the number is too
          high, the fit will be too close to the data, essentially
          following the random errors and tending to have unwanted
          fluctuations between the data points. Between these extremes,
          there is often a group of fits all similarly close to the data
          points and then, particularly when least-squares polynomials are
          used, the choice is clear: it is the fit from this group having
          the smallest number of coefficients.

          The above process can be seen as the user minimizing the
          smoothness measure (i.e., the number of coefficients) subject to
          the distance from the data points being acceptably small. Some of
          the routines, however, do this task themselves. They use a
          different measure of smoothness (in each case one that is
          continuous) and minimize it subject to the distance being less
          than a threshold specified by the user. This is a much more
          automatic process, requiring only some experimentation with the
          threshold.

          2.1.1.  Fitting criteria: norms

          A measure of the above 'distance' between the set of data points
          and the function f(x) is needed. The distance from a single data
          point (x_r,y_r) to the function can simply be taken as

                              epsilon_r = y_r - f(x_r),                 (1)

          and is called the residual of the point. (With this definition,
          the residual is regarded as a function of the coefficients
          contained in f(x); however, the term is also used to mean the
          particular value of epsilon_r which corresponds to the fitted
          values of the coefficients.) However, we need a measure of
          distance for the set of data points as a whole. Three different
          measures are used in the different routines (which measure to
          select, according to circumstances, is discussed later in this
          sub-section). With epsilon_r defined in (1), these measures, or
          norms, are

                       sum(r=1,...,m) |epsilon_r|,                      (2)

                       sqrt( sum(r=1,...,m) epsilon_r^2 ),  and         (3)

                       max(r) |epsilon_r|,                              (4)

          respectively the l_1 norm, the l_2 norm and the l_infty norm.

          Minimization of one or other of these norms usually provides the
          fitting criterion, the minimization being carried out with
          respect to the coefficients in the mathematical form used for
          f(x): with respect to the b_i for example if the mathematical
          form is the power series in (8) below. The fit which results
          from minimizing (2) is known as the l_1 fit, or the fit in the
          l_1 norm: that which results from minimizing (3) is the l_2 fit,
          the well-known least-squares fit (minimizing (3) is equivalent
          to minimizing the square of (3), i.e., the sum of squares of
          residuals, and it is the latter which is used in practice), and
          that from minimizing (4) is the l_infty, or minimax, fit.
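
          Purely as an illustration of definitions (2)-(4) (no Library
          routine is involved), the three norms of a set of residuals held
          in an assumed array EPS(1),...,EPS(M) could be computed as
          follows, with R assumed INTEGER and S1, S2 and SINF assumed
          DOUBLE PRECISION:


                 S1 = 0.0D0
                 S2 = 0.0D0
                 SINF = 0.0D0
                 DO 20 R = 1, M
                    S1 = S1 + ABS(EPS(R))
                    S2 = S2 + EPS(R)**2
                    SINF = MAX(SINF,ABS(EPS(R)))
             20  CONTINUE
                 S2 = SQRT(S2)

          S1, S2 and SINF then hold the l_1, l_2 and l_infty norms
          respectively.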

          Strictly speaking, implicit in the use of the above norms are the
          statistical assumptions that the random errors in the y_r are
          independent of one another and that any errors in the x_r are
          negligible by comparison. From this point of view, the use of
          the l_2 norm is appropriate when the random errors in the y_r
          have a normal distribution, and the l_infty norm is appropriate
          when they have a rectangular distribution, as when fitting a
          table of values rounded to a fixed number of decimal places.
          The l_1 norm
          is appropriate when the error distribution has its frequency
          function proportional to the negative exponential of the modulus
          of the normalised error -- not a common situation.

          However, the user is often indifferent to these statistical
          considerations, and simply seeks a fit which he can assess by
          inspection, perhaps visually from a graph of the results. In this
          event, the l_1 norm is particularly appropriate when the data
          are thought to contain some 'wild' points (since fitting in this
          norm tends to be unaffected by the presence of a small number of
          such points), though of course in simple situations the user may
          prefer to identify and reject these points. The l_infty norm
          should be used only when the maximum residual is of particular
          concern, as may be the case for example when the data values have
          been obtained by accurate computation, as of a mathematical
          function. Generally, however, a routine based on least-squares
          should be preferred, as being computationally faster and usually
          providing more information on which to assess the results. In
          many problems the three fits will not differ significantly for
          practical purposes.

          Some of the routines based on the l_2 norm do not minimize the
          norm itself but instead minimize some (intuitively acceptable)
          measure of smoothness subject to the norm being less than a user-
          specified threshold. These routines fit with cubic or bicubic
          splines (see (10) and (14) below) and the smoothing measures
          relate to the size of the discontinuities in their third
          derivatives. A much more automatic fitting procedure follows from
          this approach.

          2.1.2.  Weighting of data points

          The use of the above norms also assumes that the data values y_r
          are of equal (absolute) accuracy. Some of the routines enable an
          allowance to be made to take account of differing accuracies. The
          allowance takes the form of 'weights' applied to the y-values so
          that those values known to be more accurate have a greater
          influence on the fit than others. These weights, to be supplied
          by the user, should be calculated from estimates of the absolute
          accuracies of the y-values, these estimates being expressed as
          standard deviations, probable errors or some other measure which
          has the same dimensions as y. Specifically, for each y_r the
          corresponding weight w_r should be inversely proportional to the
          accuracy estimate of y_r. For example, if the percentage
          accuracy is the same for all y_r, then the absolute accuracy of
          y_r is proportional to y_r (assuming y_r to be positive, as it
          usually is in such cases) and so w_r = K/y_r, for r=1,2,...,m,
          for an arbitrary positive constant K. (This definition of weight
          is stressed because often weight is defined as the square of
          that used here.) The norms (2), (3) and (4) above are then
          replaced respectively by

                       sum(r=1,...,m) |w_r epsilon_r|,                  (5)

                       sqrt( sum(r=1,...,m) w_r^2 epsilon_r^2 ),  and   (6)

                       max(r) |w_r epsilon_r|.                          (7)

          Again it is the square of (6) which is used in practice rather
          than (6) itself.

          2.2. Curve Fitting

          When, as is commonly the case, the mathematical form of the
          fitting function is immaterial to the problem, polynomials and
          cubic splines are to be preferred because their simplicity and
          ease of handling confer substantial benefits. The cubic spline is
          the more versatile of the two. It consists of a number of cubic
          polynomial segments joined end to end with continuity in first
          and second derivatives at the joins. The third derivative at the
          joins is in general discontinuous. The x-values of the joins are
          called knots, or, more precisely, interior knots. Their number
          determines the number of coefficients in the spline, just as the
          degree determines the number of coefficients in a polynomial.

          2.2.1.  Representation of polynomials

          Rather than using the power-series form

                   f(x) == b_0 + b_1 x + b_2 x^2 + ... + b_k x^k        (8)

          to represent a polynomial, the routines in this chapter use the
          Chebyshev series form

                   f(x) == (1/2)a_0 T_0(x) + a_1 T_1(x) + a_2 T_2(x)
                           + ... + a_k T_k(x),                          (9)

          where T_i(x) is the Chebyshev polynomial of the first kind of
          degree i (see Cox and Hayes [1], page 9), and where the range of
          x has been normalised to run from -1 to +1. The use of either
          form leads theoretically to the same fitted polynomial, but in
          practice results may differ substantially because of the effects
          of rounding error. The Chebyshev form is to be preferred, since
          it leads to much better accuracy in general, both in the
          computation of the coefficients and in the subsequent evaluation
          of the fitted polynomial at specified points. This form also has
          other advantages: for example, since the later terms in (9)
          generally decrease much more rapidly from left to right than do
          those in (8), the situation is more often encountered where the
          last terms are negligible and it is obvious that the degree of
          the polynomial can be reduced (note that on the interval -1<=x<=1
          for all i, T_i(x) attains the value unity but never exceeds it,
          so that the coefficient a_i gives directly the maximum value of
          the term containing it).
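
          By way of illustration only, the Chebyshev-series form (9) is
          normally evaluated by a recurrence (Clenshaw's method) rather
          than by forming the T_i(x) explicitly; the Library itself
          provides routines for this purpose (see Section 3.7). A sketch,
          assuming the coefficients are held in an array A(0:K), that X
          has already been normalised to lie in [-1,1], and that A, X, BJ,
          BJP1, BJP2 and FX are DOUBLE PRECISION, is:


                 BJP1 = 0.0D0
                 BJP2 = 0.0D0
                 DO 20 J = K, 1, -1
                    BJ = 2.0D0*X*BJP1 - BJP2 + A(J)
                    BJP2 = BJP1
                    BJP1 = BJ
             20  CONTINUE
                 FX = X*BJP1 - BJP2 + 0.5D0*A(0)

          FX then contains the value of the series (9) at X.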

          2.2.2.  Representation of cubic splines

          A cubic spline is represented in the form

                  f(x) == c_1 N_1(x) + c_2 N_2(x) + ... + c_p N_p(x),  (10)

          where N_i(x), for i=1,2,...,p, is a normalised cubic B-spline (see
          Hayes [2]). This form, also, has advantages of computational
          speed and accuracy over alternative representations.

          2.3. Surface Fitting

          There are now two independent variables, and we shall denote
          these by x and y. The dependent variable, which was denoted by y
          in the curve-fitting case, will now be denoted by f. (This is a
          rather different notation from that indicated for the general-
          dimensional problem in the first paragraph of Section 2.1 , but
          it has some advantages in presentation.)

          Again, in the absence of contrary indications in the particular
          application being considered, polynomials and splines are the
          approximating functions most commonly used. Only splines are used
          by the surface-fitting routines in this chapter.

          2.3.1.  Bicubic splines: definition and representation

          The bicubic spline is defined over a rectangle R in the (x,y)
          plane, the sides of R being parallel to the x- and y-axes. R is
          divided into rectangular panels, again by lines parallel to the
          axes. Over each panel the bicubic spline is a bicubic polynomial,
          that is it takes the form

                     sum(i=0,...,3) sum(j=0,...,3) a_ij x^i y^j.       (13)

          Each of these polynomials joins the polynomials in adjacent
          panels with continuity up to the second derivative. The constant
          x-values of the dividing lines parallel to the y-axis form the
          set of interior knots for the variable x, corresponding precisely
          to the set of interior knots of a cubic spline. Similarly, the
          constant y-values of dividing lines parallel to the x-axis form
          the set of interior knots for the variable y. Instead of
          representing the bicubic spline in terms of the above set of
          bicubic polynomials, however, it is represented, for the sake of
          computational speed and accuracy, in the form

            f(x,y) = sum(i=1,...,p) sum(j=1,...,q) c_ij M_i(x) N_j(y), (14)

          where M_i(x), for i=1,2,...,p, and N_j(y), for j=1,2,...,q, are
          normalised B-splines (see Hayes and Halliday [4] for further
          details of bicubic splines and Hayes [2] for normalised B-
          splines).

          2.4. General Linear and Nonlinear Fitting Functions

          We have indicated earlier that, unless the data-fitting
          application under consideration specifically requires some other
          type of fitting function, a polynomial or a spline is usually to
          be preferred. Special routines for these functions, in one and in
          two variables, are provided in this chapter. When the application
          does specify some other fitting function, however, it may be
          treated by a routine which deals with a general linear function,
          or by one for a general nonlinear function, depending on whether
          the coefficients in the given function occur linearly or
          nonlinearly.

          The general linear fitting function can be written in the form

              f(x) == c_1 phi_1(x) + c_2 phi_2(x) + ... + c_p phi_p(x), (15)

          where x is a vector of one or more independent variables, and
          the phi_i are any given functions of these variables (though
          they must be linearly independent of one another if there is to
          be the possibility of a unique solution to the fitting problem).
          This is not intended to imply that each phi_i is necessarily a
          function of all the variables: we may have, for example, that
          each phi_i is a function of a different single variable, and
          even that one of the phi_i is a constant. All that is required
          is that a value of each phi_i(x) can be computed when a value of
          each independent variable is given.
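
          As an illustration only, suppose p=3 with phi_1(x) == 1,
          phi_2(x) == x and phi_3(x) == exp(-x) (an arbitrary choice made
          for this sketch). A general linear fitting routine (see Section
          3.5.1) typically requires the m by p matrix of values
          phi_i(x_r), which could be set up as follows, with the data in
          an assumed array XDATA(1),...,XDATA(M), the matrix in an assumed
          DOUBLE PRECISION array A(M,3) and R assumed INTEGER:


                 DO 20 R = 1, M
                    A(R,1) = 1.0D0
                    A(R,2) = XDATA(R)
                    A(R,3) = EXP(-XDATA(R))
             20  CONTINUE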

          When the fitting function f(x) is not linear in its coefficients,
          no more specific representation is available in general than f(x)
          itself. However, we shall find it helpful later on to indicate
          the fact that f(x) contains a number of coefficients (to be
          determined by the fitting process) by using instead the notation
          f(x;c), where c denotes the vector of coefficients. An example of
          a nonlinear fitting function is


                  f(x;c) == c_1 + c_2 exp(-c_4 x) + c_3 exp(-c_5 x),   (16)

          which is in one variable and contains five coefficients. Note
          that here, as elsewhere in this Chapter Introduction, we use the
          term 'coefficients' to include all the quantities whose values
          are to be determined by the fitting process, not just those
          which occur linearly. We may observe that it is only the
          presence of the coefficients c_4 and c_5 which makes the form
          (16) nonlinear. If the values of these two coefficients were
          known beforehand, (16) would instead be a linear function which,
          in terms of the general linear form (15), has p=3 and

              phi_1(x) == 1,  phi_2(x) == exp(-c_4 x),  and
              phi_3(x) == exp(-c_5 x).

          We may note also that polynomials and splines, such as (9) and
          (14), are themselves linear in their coefficients. Thus if, when
          fitting with these functions, a suitable special routine is not
          available (as when more than two independent variables are
          involved or when fitting in the l_1 norm), it is appropriate to
          use a routine designed for a general linear function.

          2.5. Constrained Problems

          So far, we have considered only fitting processes in which the
          values of the coefficients in the fitting function are determined
          by an unconstrained minimization of a particular norm. Some
          fitting problems, however, require that further restrictions be
          placed on the determination of the coefficient values. Sometimes
          these restrictions are contained explicitly in the formulation of
          the problem in the form of equalities or inequalities which the
          coefficients, or some function of them, must satisfy. For
          example, if the fitting function contains a term Aexp(-kx), it
          may be required that k>=0. Often, however, the equality or
          inequality constraints relate to the value of the fitting
          function or its derivatives at specified values of the
          independent variable(s), but these too can be expressed in terms
          of the coefficients of the fitting function, and it is
          appropriate to do this if a general linear or nonlinear routine
          is being used. For example, if the fitting function is that given
          in (10), the requirement that the first derivative of the
          function at x = x_0 be non-negative can be expressed as

               c_1 N_1'(x_0) + c_2 N_2'(x_0) + ... + c_p N_p'(x_0) >= 0, (17)

          where the prime denotes differentiation with respect to x and
          each derivative is evaluated at x = x_0. On the other hand, if
          the requirement had been that the derivative at x = x_0 be
          exactly zero,
          the inequality sign in (17) would be replaced by an equality.

          Routines which provide a facility for minimizing the appropriate
          norm subject to such constraints are discussed in Section 3.6.

          2.6. References

          [1]   Cox M G and Hayes J G (1973) Curve fitting: a guide and
                suite of algorithms for the non-specialist user. Report
                NAC26. National Physical Laboratory.

          [2]   Hayes J G (1974) Numerical Methods for Curve and Surface
                Fitting. Bull. Inst. Math. Appl. 10 144--152.

                (For definition of normalised B-splines and details of
                numerical methods.)

          [3]   Hayes J G (1970) Curve Fitting by Polynomials in One
                Variable. Numerical Approximation to Functions and Data. (ed
                J G Hayes) Athlone Press, London.

          [4]   Hayes J G and Halliday J (1974) The Least-squares Fitting of
                Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
                Appl. 14 89--103.

          3. Recommendations on Choice and Use of Routines

          3.1. General

          The choice of a routine to treat a particular fitting problem
          will depend first of all on the fitting function and the norm to
          be used. Unless there is good reason to the contrary, the fitting
          function should be a polynomial or a cubic spline (in the
          appropriate number of variables) and the norm should be the l
                                                                       2
          norm (leading to the least-squares fit). If some other function
          is to be used, the choice of routine will depend on whether the
          function is nonlinear (in which case see Section 3.5.2) or linear
          in its coefficients (see Section 3.5.1), and, in the latter case,
          on whether the l  or l  norm is to be used. The latter section is
                          1     2
          appropriate for polynomials and splines, too, if the l  norm is
                                                                1
          preferred.

          In the case of a polynomial or cubic spline, if there is only one
          independent variable, the user should choose a spline (Section
          3.3) when the curve represented by the data is of complicated
          form, perhaps with several peaks and troughs. When the curve is
          of simple form, first try a polynomial (see Section 3.2) of low
          degree, say up to degree 5 or 6, and then a spline if the
          polynomial fails to provide a satisfactory fit. (Of course, if
          third-derivative discontinuities are unacceptable to the user, a
          polynomial is the only choice.) If the problem is one of surface
          fitting, one of the spline routines should be used (Section 3.4).
          If the problem has more than two independent variables, it may be
          treated by the general linear routine in Section 3.5.1, again
          using a polynomial in the first instance.

          Another factor which affects the choice of routine is the
          presence of constraints, as previously discussed in Section 2.5.
          Indeed this factor is likely to be overriding at present, because
          of the limited number of routines which have the necessary
          facility. See Section 3.6.

          3.1.1.  Data considerations

          A satisfactory fit cannot be expected by any means if the number
          and arrangement of the data points do not adequately represent
          the character of the underlying relationship: sharp changes in
          behaviour, in particular, such as sharp peaks, should be well
          covered. Data points should extend over the whole range of
          interest of the independent variable(s): extrapolation outside
          the data ranges is most unwise. Then, with polynomials, it is
          advantageous to have additional points near the ends of the
          ranges, to counteract the tendency of polynomials to develop
          fluctuations in these regions. When, with polynomial curves, the
          user can precisely choose the x-values of the data, the special
          points defined in Section 3.2.2 should be selected. With splines
          the choice is less critical as long as the character of the
          relationship is adequately represented. All fits should be tested
          graphically before accepting them as satisfactory.

          For this purpose it should be noted that it is not sufficient to
          plot the values of the fitted function only at the data values of
          the independent variable(s); at the least, its values at a
          similar number of intermediate points should also be plotted, as
          unwanted fluctuations may otherwise go undetected. Such
          fluctuations are the less likely to occur the lower the number of
          coefficients chosen in the fitting function. No firm guide can be
          given, but as a rough rule, at least initially, the number of
          coefficients should not exceed half the number of data points
          (points with equal or nearly equal values of the independent
          variable, or both independent variables in surface fitting,
          counting as a single point for this purpose). However, the
          situation may be such, particularly with a small number of data
          points, that a satisfactorily close fit to the data cannot be
          achieved without unwanted fluctuations occurring. In such cases,
          it is often possible to improve the situation by a transformation
          of one or more of the variables, as discussed in the next
          paragraph: otherwise it will be necessary to provide extra data
          points. Further advice on curve fitting is given in Cox and Hayes
          [1] and, for polynomials only, in Hayes [3] of Section 2.6. Much
          of the advice applies also to surface fitting; see also the
          Routine Documents.

          3.1.2.  Transformation of variables

          Before starting the fitting, consideration should be given to the
          choice of a good form in which to deal with each of the
          variables: often it will be satisfactory to use the variables as
          they stand, but sometimes the use of the logarithm, square root,
          or some other function of a variable will lead to a better-
          behaved relationship. This question is customarily taken into
          account in preparing graphs and tables of a relationship and the
          same considerations apply when curve or surface fitting. The
          practical context will often give a guide. In general, it is best
          to avoid having to deal with a relationship whose behaviour in
          one region is radically different from that in another. A steep
          rise at the left-hand end of a curve, for example, can often best
          be treated by curve fitting in terms of log(x+c) with some
          suitable value of the constant c. A case when such a
          transformation gave substantial benefit is discussed in Hayes [3]
          page 60. According to the features exhibited in any particular
          case, transformation of either dependent variable or independent
          variable(s) or both may be beneficial. When there is a choice it
          is usually better to transform the independent variable(s): if
          the dependent variable is transformed, the weights attached to
          the data points must be adjusted. Thus (denoting the dependent
          variable by y, as in the notation for curves) if the y  to be
                                                                r
          fitted have been obtained by a transformation y=g(Y) from
          original data values Y , with weights W , for r=1,2,...,m, we
                                r                r
          must take

                                w =W /(dy/dY),                         (18)
                                 r  r

          where the derivative is evaluated at Y . Strictly, the
                                                r
          transformation of Y and the adjustment of weights are valid only
          when the data errors in the Y  are small compared with the range
                                       r
          spanned by the Y , but this is usually the case.
                          r
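
          As an illustration of (18), the following fragment (a sketch
          only, with illustrative names, not part of the Library)
          transforms the data and adjusts the weights for the common
          case of a logarithmic transformation y=log(Y), for which
          dy/dY=1/Y, so that each adjusted weight is the original
          weight multiplied by the corresponding data value:

          C     Sketch only: transform y = log(Y) and adjust the
          C     weights as in (18).  YBIG and WBIG hold the original
          C     data values and weights; Y and W receive the
          C     transformed values and adjusted weights.  Requires
          C     YBIG(R) > 0.
                SUBROUTINE WADJ (M, YBIG, WBIG, Y, W)
                INTEGER          M, R
                DOUBLE PRECISION YBIG(M), WBIG(M), Y(M), W(M)
                DO 10 R = 1, M
                   Y(R) = LOG(YBIG(R))
          C        dy/dY = 1/Y, so w = W/(dy/dY) = W*Y
                   W(R) = WBIG(R)*YBIG(R)
             10 CONTINUE
                RETURN
                END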

          3.2. Polynomial Curves

          3.2.1.  Least-squares polynomials: arbitrary data points

          E02ADF fits to arbitrary data points, with arbitrary weights,
          polynomials of all degrees up to a maximum degree k, which is at
          choice. If the user is seeking only a low degree polynomial, up
          to degree 5 or 6 say, k=10 is an appropriate value, providing
          there are about 20 data points or more. To assist in deciding the
          degree of polynomial which satisfactorily fits the data, the
          routine provides the root-mean-square-residual s  for all degrees
                                                          i
          i=1,2,...,k. In a satisfactory case, these s  will decrease
                                                      i
          steadily as i increases and then settle down to a fairly constant
          value, as shown in the example

                  i    s
                        i

                  0    3.5215

                  1    0.7708

                  2    0.1861

                  3    0.0820

                  4    0.0554

                  5    0.0251

                  6    0.0264

                  7    0.0280

                  8    0.0277

                  9    0.0297

                  10   0.0271

          If the s  values settle down in this way, it indicates that the
                  i
          closest polynomial approximation justified by the data has been
          achieved. The degree which first gives the approximately constant
          value of s  (degree 5 in the example) is the appropriate degree
                    i
          to select. (Users who are prepared to accept a fit of degree
          higher than six should simply find a high enough value of k to
          enable the type of behaviour indicated by the example to be
          detected: thus they should seek values of k for which at least 4
          or 5 consecutive values of s  are approximately the same.) If the
                                      i
          degree were allowed to go high enough, s  would, in most cases,
                                                  i
          eventually start to decrease again, indicating that the data
          points are being fitted too closely and that undesirable
          fluctuations are developing between the points. In some cases,
          particularly with a small number of data points, this final
          decrease is not distinguishable from the initial decrease in s .
                                                                        i
          In such cases, users may seek an acceptable fit by examining the
          graphs of several of the polynomials obtained. Failing this, they
          may (a) seek a transformation of variables which improves the
          behaviour, (b) try fitting a spline, or (c) provide more data
          points. If data can be provided simply by drawing an
          approximating curve by hand and reading points from it, use the
          points discussed in Section 3.2.2.
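
          The advice above can be turned into a rough automatic check.
          The following function (an illustrative heuristic only, not
          part of the Library) scans the root-mean-square residuals
          returned by E02ADF in S(1),S(2),...,S(KPLUS1) and returns the
          first degree at which the next few values are approximately
          constant, or -1 if they never settle down:

          C     Illustrative heuristic, not a Library routine: return
          C     the first degree I-1 for which S(I),...,S(I+NSAME)
          C     agree to within a relative tolerance TOL, or -1 if the
          C     values never settle down.  S holds the root-mean-square
          C     residuals from E02ADF.
                INTEGER FUNCTION IDEG (KPLUS1, S)
                INTEGER          KPLUS1, I, J, NSAME
                DOUBLE PRECISION S(KPLUS1), TOL
                PARAMETER        (NSAME=4, TOL=0.2D0)
                DO 20 I = 1, KPLUS1 - NSAME
                   DO 10 J = I + 1, I + NSAME
                      IF (ABS(S(J)-S(I)) .GT. TOL*S(I)) GO TO 20
             10    CONTINUE
                   IDEG = I - 1
                   RETURN
             20 CONTINUE
                IDEG = -1
                RETURN
                END

          With these values of NSAME and TOL the function returns
          degree 5 for the example tabulated above; the thresholds are,
          of course, adjustable and are no substitute for a graphical
          check.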

          3.2.2.  Least-squares polynomials: selected data points

          When users are at liberty to choose the x-values of data points,
          such as when the points are taken from a graph, it is most
          advantageous when fitting with polynomials to use the values
          x =cos((pi)r/n), for r=0,1,...,n for some value of n, a suitable
           r
          value for which is discussed at the end of this section. Note
          that these x  relate to the variable x after it has been
                      r
          normalised so that its range of interest is -1 to +1. E02ADF may
          then be used as in Section 3.2.1 to seek a satisfactory fit.
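
          A fragment such as the following (a sketch only; the names
          are illustrative) generates the recommended x-values and maps
          them back from the normalised range -1 to +1 to the original
          range of interest, XMIN to XMAX. Note that the points are
          produced in decreasing order of x, whereas E02ADF requires
          its X array in non-decreasing order:

          C     Sketch only: x(r) = cos(pi*r/n), r = 0,1,...,n, on the
          C     normalised range -1 to +1, mapped back to the interval
          C     XMIN to XMAX.
                SUBROUTINE CHEBPT (N, XMIN, XMAX, X)
                INTEGER          N, R
                DOUBLE PRECISION XMIN, XMAX, X(0:N), PI, XCAP
                PI = 4.0D0*ATAN(1.0D0)
                DO 10 R = 0, N
                   XCAP = COS(PI*DBLE(R)/DBLE(N))
                   X(R) = 0.5D0*(XCAP*(XMAX-XMIN) + XMAX + XMIN)
             10 CONTINUE
                RETURN
                END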

          3.3. Cubic Spline Curves

          3.3.1.  Least-squares cubic splines

          E02BAF fits to arbitrary data points, with arbitrary weights, a
          cubic spline with interior knots specified by the user. The
          choice of these knots so as to give an acceptable fit must
          largely be a matter of trial and error, though with a little
          experience a satisfactory choice can often be made after one or
          two trials. It is usually best to start with a small number of
          knots (too many will result in unwanted fluctuations in the fit,
          or even in there being no unique solution) and, examining the fit
          graphically at each stage, to add a few knots at a time at places
          where the fit is particularly poor. Moving the existing knots
          towards these places will also often improve the fit. In regions
          where the behaviour of the curve underlying the data is changing
          rapidly, closer knots will be needed than elsewhere. Otherwise,
          positioning is not usually very critical and equally-spaced knots
          are often satisfactory. See also the next section, however.

          A useful feature of the routine is that it can be used in
          applications which require the continuity to be less than the
          normal continuity of the cubic spline. For example, the fit may
          be required to have a discontinuous slope at some point in the
          range. This can be achieved by placing three coincident knots at
          the given point. Similarly a discontinuity in the second
          derivative at a point can be achieved by placing two knots there.
          Analogy with these discontinuous cases can provide guidance in
          more usual cases: for example, just as three coincident knots can
          produce a discontinuity in slope, so three close knots can
          produce a rapid change in slope. The closer the knots are, the
          more rapid can the change be.

                                     Figure 1
                   Please see figure in printed Reference Manual

          An example set of data is given in Figure 1. It is a rather
          tricky set, because of the scarcity of data on the right, but it
          will serve to illustrate some of the above points and to show
          some of the dangers to be avoided. Three interior knots
          (indicated by the vertical lines at the top of the diagram) are
          chosen as a start. We see that the resulting curve is not steep
          enough in the middle and fluctuates at both ends, severely on the
          right. The spline is unable to cope with the shape and more knots
          are needed.

          In Figure 2, three knots have been added in the centre, where the
          data shows a rapid change in behaviour, and one further out at
          each end, where the fit is poor. On the right the fit is still
          poor, so a further knot is added in this region and, in
          Figure 3, disaster ensues in rather spectacular fashion.

                                     Figure 2
                   Please see figure in printed Reference Manual

                                     Figure 3
                   Please see figure in printed Reference Manual

          The reason is that, at the right-hand end, the fits in Figure 1
          and Figure 2 have been interpreted as poor simply because of the
          fluctuations about the curve underlying the data (or what it is
          naturally assumed to be). But the fitting process knows only
          about the data and nothing else about the underlying curve, so it
          is important to consider only closeness to the data when deciding
          goodness of fit.

          Thus, in Figure 1, the curve fits the last two data points quite
          well compared with the fit elsewhere, so no knot should have been
          added in this region. In Figure 2, the curve goes exactly through
          the last two points, so a further knot is certainly not needed
          here.


                                     Figure 4
                   Please see figure in printed Reference Manual

          Figure 4 shows what can be achieved without the extra knot on
          each of the flat regions. Remembering that within each knot
          interval the spline is a cubic polynomial, there is really no
          need to have more than one knot interval covering each flat
          region.

          What we have, in fact, in Figure 2 and Figure 3 is a case of too
          many knots (so too many coefficients in the spline equation) for
          the number of data points. The warning in the second paragraph of
          Section 2.1 was that the fit will then be too close to the data,
          tending to have unwanted fluctuations between the data points.
          The warning applies locally for splines, in the sense that, in
          localities where there are plenty of data points, there can be a
          lot of knots, as long as there are few knots where there are few
          points, especially near the ends of the interval. In the present
          example, with so few data points on the right, just the one extra
          knot in Figure 2 is too many! The signs are clearly present, with
          the last two points fitted exactly (at least to the graphical
          accuracy and actually much closer than that) and fluctuations
          within the last two knot-intervals (cf. Figure 1, where only the
          final point is fitted exactly and one of the wobbles spans
          several data points).

          The situation in Figure 3 is different. The fit, if computed
          exactly, would still pass through the last two data points, with
          even more violent fluctuations. However, the problem has become
          so ill-conditioned that all accuracy has been lost. Indeed, if
          the last interior knot were moved a tiny amount to the right,
          there would be no unique solution and an error message would have
          been caused. Near-singularity is, sadly, not picked up by the
          routine, but can be spotted readily in a graph, as in Figure 3.
          B-spline coefficients becoming large, with alternating signs, is
          another indication. However, it is better to avoid such
          situations, firstly by providing, whenever possible, data
          adequately covering the range of interest, and secondly by
          placing knots only where there is a reasonable amount of data.

          The example here could, in fact, have utilised from the start the
          observation made in the second paragraph of this section, that
          three close knots can produce a rapid change in slope. The
          example has two such rapid changes and so requires two sets of
          three close knots (in fact, the two sets can be so close that one
          knot can serve in both sets, so only five knots prove sufficient
          in Figure 4). It should be noted, however, that the rapid turn
          occurs within the range spanned by the three knots. This is the
          reason that the six knots in Figure 2 are not satisfactory as
          they do not quite span the two turns.

          Some more examples to illustrate the choice of knots are given in
          Cox and Hayes [1].

          3.3.2.  Automatic fitting with cubic splines

          E02BEF also fits cubic splines to arbitrary data points with
          arbitrary weights but itself chooses the number and positions of
          the knots. The user has to supply only a threshold for the sum of
          squares of residuals. The routine first builds up a knot set by a
          series of trial fits in the l  norm. Then, with the knot set
                                       2
          decided, the final spline is computed to minimize a certain
          smoothing measure subject to satisfaction of the chosen
          threshold. Thus it is easier to use than E02BAF (see previous
          section), requiring only some experimentation with this
          threshold. It should therefore be first choice unless the user
          has a preference for the ordinary least-squares fit or, for
          example, wishes to experiment with knot positions, trying to keep
          their number down (E02BEF aims only to be reasonably frugal with
          knots).

          3.4. Spline Surfaces

          3.4.1.  Least-squares bicubic splines

          E02DAF fits to arbitrary data points, with arbitrary weights, a
          bicubic spline with its two sets of interior knots specified by
          the user. For choosing these knots, the advice given for cubic
          splines, in Section 3.3.1 above, applies here too. (See also the
          next section, however.) If changes in the behaviour of the
          surface underlying the data are more marked in the direction of
          one variable than of the other, more knots will be needed for the
          former variable than the latter. Note also that, in the surface
          case, the reduction in continuity caused by coincident knots will
          extend across the whole spline surface: for example, if three
          knots associated with the variable x are chosen to coincide at a
          value L, the spline surface will have a discontinuous slope
          across the whole extent of the line x=L.

          With some sets of data and some choices of knots, the least-
          squares bicubic spline will not be unique. This will not occur,
          with a reasonable choice of knots, if the rectangle R is well
          covered with data points: here R is defined as the smallest
          rectangle in the (x,y) plane, with sides parallel to the axes,
          which contains all the data points. Where the least-squares
          solution is not unique, the minimal least-squares solution is
          computed, namely that least-squares solution which has the
          smallest value of the sum of squares of the B-spline coefficients
          c   (see the end of Section 2.3.2 above). This choice of least-
           ij
          squares solution tends to minimize the risk of unwanted
          fluctuations in the fit. The fit will not be reliable, however,
          in regions where there are few or no data points.

          3.4.2.  Automatic fitting with bicubic splines

          E02DDF also fits bicubic splines to arbitrary data points with
          arbitrary weights but chooses the knot sets itself. The user has
          to supply only a threshold for the sum of squares of residuals.
          Just like the automatic curve E02BEF (Section 3.3.2), E02DDF then
          builds up the knot sets and finally fits a spline minimizing a
          smoothing measure subject to satisfaction of the threshold.
          Again, this easier to use routine is normally to be preferred, at
          least in the first instance.

          E02DCF is a very similar routine to E02DDF but deals with data
          points of equal weight which lie on a rectangular mesh in the
          (x,y) plane. This kind of data allows a very much faster
          computation and so is to be preferred when applicable.
          Substantial departures from equal weighting can be ignored if the
          user is not concerned with statistical questions, though the
          quality of the fit will suffer if this is taken too far. In such
          cases, the user should revert to E02DDF.

          3.5. General Linear and Nonlinear Fitting Functions

          3.5.1.  General linear functions

          For the general linear function (15), routines are available for
          fitting in the l  and l  norms. The least-squares routines (which
                          1      2
          are to be preferred unless there is good reason to use another
          norm -- see Section 2.1.1) are in Chapter F04. The l  routine is
                                                              1
          E02GAF.

          All the above routines are essentially linear algebra routines,
          and in considering their use we need to view the fitting process
          in a slightly different way from hitherto. Taking y to be the
          dependent variable and x the vector of independent variables, we
          have, as for equation (1) but with each x  now a vector,
                                                   r

                          (epsilon) =y -f(x )  r=1,2,...,m.
                                   r  r    r

          Substituting for f(x) the general linear form (15), we can write
          this as

          c (phi) (x )+c (phi) (x )+...+c (phi) (x )=y -(epsilon) ,
           1     1  r   2     2  r       p     p  r   r          r
           r=1,2,...,m                                                 (19)


          Thus we have a system of linear equations in the coefficients c .
                                                                         j
          Usually, in writing these equations, the (epsilon)  are omitted
                                                            r
          and simply taken as implied. The system of equations is then
          described as an overdetermined system (since we must have m>=p if
          there is to be the possibility of a unique solution to our
          fitting problem), and the fitting process of computing the c  to
                                                                      j
          minimize one or other of the norms (2), (3) and (4) can be
          described, in relation to the system of equations, as solving the
          overdetermined system in that particular norm. In matrix
          notation, the system can be written as

                                   (Phi)c=y,                           (20)

          where (Phi) is the m by p matrix whose element in row r and
          column j is (phi) (x ), for r=1,2,...,m; j=1,2,...,p. The vectors
                           j  r
          c and y respectively contain the coefficients c  and the data
                                                         j
          values y .
                  r

          The routines, however, use the standard notation of linear
          algebra, the overdetermined system of equations being denoted by

                                     Ax=b                              (21)

          The correspondence between this notation and that which we have
          used for the data-fitting problem (equation (20)) is therefore
          given by

                            A==(Phi),   x==c   b==y                    (22)

          Note that the norms used by these routines are the unweighted
          norms (2) and (3). If the user wishes to apply weights to the
          data points, that is to use the norms (5) or (6), the
          equivalences (22) should be replaced by

                              A==D(Phi),   x==c   b==Dy

          where D is a diagonal matrix with w  as the rth diagonal element.
                                             r
          Here w , for r=1,2,...,m, is the weight of the rth data point as
                r
          defined in Section 2.1.2.
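
          For the case of a single independent variable, the weighted
          system may be set up as in the following sketch (not part of
          the Library; PHI is a hypothetical user-supplied function
          returning the value of the jth basis function at a given
          value of x):

          C     Sketch only: form A = D(Phi) and b = Dy, as in (20) and
          C     (22) with weights included, ready for a least-squares
          C     solver (Chapter F04) or for E02GAF.  PHI(J,XR) is a
          C     hypothetical user-supplied function returning the value
          C     of the jth basis function at XR.
                SUBROUTINE DESMAT (M, P, X, Y, W, LDA, A, B, PHI)
                INTEGER          M, P, LDA, R, J
                DOUBLE PRECISION X(M), Y(M), W(M), A(LDA,P), B(M), PHI
                EXTERNAL         PHI
                DO 20 R = 1, M
                   DO 10 J = 1, P
                      A(R,J) = W(R)*PHI(J,X(R))
             10    CONTINUE
                   B(R) = W(R)*Y(R)
             20 CONTINUE
                RETURN
                END

          For the unweighted norms (2) and (3) the weights W(r) are
          simply set to 1.0.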

          3.5.2.  Nonlinear functions

          Routines for fitting with a nonlinear function in the l  norm are
                                                                 2
          provided in Chapter E04, and that chapter's Introduction should
          be consulted for the appropriate choice of routine. Again,
          however, the notation adopted is different from that we have used
          for data fitting. In the latter, we denote the fitting function
          by f(x;c), where x is the vector of independent variables and c
          is the vector of coefficients, whose values are to be determined.
          The squared l  norm, to be minimized with respect to the elements
                       2
          of c, is then

                               m
                               --  2            2
                               >  w [y -f(x ;c)]                       (23)
                               --  r  r    r
                               r=1

          where y  is the rth data value of the dependent variable, x  is
                 r                                                   r
          the vector containing the rth values of the independent
          variables, and w  is the corresponding weight as defined in
                          r
          Section 2.1.2.

          On the other hand, in the nonlinear least-squares routines of
          Chapter E04, the function to be minimized is denoted by

                                   m
                                   --  2
                                   >  f (x),                           (24)
                                   --  i
                                   i=1

          the minimization being carried out with respect to the elements
          of the vector x. The correspondence between the two notations is
          given by

          x==c and

          f (x)==w [y -f(x ;c)],  i=r=1,2,...,m.
           i      r  r    r

          Note especially that the vector x of variables of the nonlinear
          least-squares routines is the vector c of coefficients of the
          data-fitting problem, and in particular that, if the selected
          routine requires derivatives of the f (x) to be provided, these
                                               i
          are derivatives of w [y -f(x ;c)] with respect to the
                              r  r    r
          coefficients of the data-fitting problem.
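
          As an illustration (a sketch only, with illustrative names;
          the precise subroutine interface required is defined by the
          chosen Chapter E04 routine), the residuals for the fitting
          function (16) may be computed as follows, the array C playing
          the role of the vector x of the E04 notation:

          C     Sketch only: residuals w(r)*(y(r) - f(x(r);c)) for the
          C     fitting function (16),
          C     c1 + c2*exp(-c4*x) + c3*exp(-c5*x).
                SUBROUTINE RESID (M, XD, YD, W, C, F)
                INTEGER          M, R
                DOUBLE PRECISION XD(M), YD(M), W(M), C(5), F(M), FIT
                DO 10 R = 1, M
                   FIT  = C(1) + C(2)*EXP(-C(4)*XD(R))
                   FIT  = FIT + C(3)*EXP(-C(5)*XD(R))
                   F(R) = W(R)*(YD(R) - FIT)
             10 CONTINUE
                RETURN
                END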

          3.6. Constraints

          At present, there are only a limited number of routines which fit
          subject to constraints. Chapter E04 contains a routine, E04UCF,
          which can be used for fitting with a nonlinear function in the l
                                                                          2
          norm subject to equality or inequality constraints. This routine,
          unlike those in that chapter suited to the unconstrained case, is
          not designed specifically for minimizing functions which are sums
          of squares, and so the function (23) has to be treated as a
          general nonlinear function. The E04 Chapter Introduction should
          be consulted.

          The remaining constraint routine relates to fitting with
          polynomials in the l  norm. E02AGF deals with polynomial curves
                              2
          and allows precise values of the fitting function and (if
          required) all its derivatives up to a given order to be
          prescribed at one or more values of the independent variable.

          3.7. Evaluation, Differentiation and Integration

          Routines are available to evaluate, differentiate and integrate
          polynomials in Chebyshev-series form and cubic or bicubic splines
          in B-spline form. These polynomials and splines may have been
          produced by the various fitting routines or, in the case of
          polynomials, from prior calls of the differentiation and
          integration routines themselves.

          E02AEF and E02AKF evaluate polynomial curves: the latter has a
          longer parameter list but does not require the user to normalise
          the values of the independent variable and can accept
          coefficients which are not stored in contiguous locations. E02BBF
          evaluates cubic spline curves, and E02DEF and E02DFF bicubic
          spline surfaces.

          Differentiation and integration of polynomial curves are carried
          out by E02AHF and E02AJF respectively. The results are provided
          in Chebyshev-series form and so repeated differentiation and
          integration are catered for. Values of the derivative or integral
          can then be computed using the appropriate evaluation routine.

          For splines the differentiation and integration routines provided
          are of a different nature from those for polynomials. E02BCF
          provides values of a cubic spline curve and its first three
          derivatives (the rest, of course, are zero) at a given value of
          x. E02BDF computes the value of the definite integral of a cubic
          spline over its whole range. These routines can also be applied
          to surfaces of the form (14). For example, if, for each value of
          j in turn, the coefficients c  , for i=1,2,...,p are supplied to
                                       ij
          E02BCF with x=x  and on each occasion we select from the output
                         0
          the value of the second derivative, d  say, and if the whole set
                                               j
          of d  are then supplied to the same routine with x=y , the output
              j                                               0
          will contain all the values at (x ,y ) of
                                           0  0

                              2            r+2
                             d f          d    f
                            ------  and  --------,  r=1,2,3.
                               2            2  r
                             dx           dx dy

          Equally, if after each of the first p calls of E02BCF we had
          selected the function value (E02BBF would also provide this)
          instead of the second derivative and we had supplied these values
          to E02BDF, the result obtained would have been the value of

                                     B
                                     /
                                     |f(x ,y)dy,
                                     /   0
                                     A

          where A and B are the end-points of the y interval over which the
          spline was defined.

          3.8. Index

          Automatic fitting,
               with bicubic splines                                  E02DCF
                                                                     E02DDF
               with cubic splines                                    E02BEF
          Data on rectangular mesh                                   E02DCF
          Differentiation,
               of cubic splines                                      E02BCF
               of polynomials                                        E02AHF
          Evaluation,
               of bicubic splines                                    E02DEF
                                                                     E02DFF
               of cubic splines                                      E02BBF
               of cubic splines and derivatives                      E02BCF
               of definite integral of cubic splines                 E02BDF
               of polynomials                                        E02AEF
                                                                     E02AKF
          Integration,
               of cubic splines (definite integral)                  E02BDF
               of polynomials                                        E02AJF
          Least-squares curve fit,
               with cubic splines                                    E02BAF
               with polynomials,
                    arbitrary data points                            E02ADF
                    with constraints                                 E02AGF
          Least-squares surface fit with bicubic splines             E02DAF
          l  fit with general linear function,                       E02GAF
           1
          Sorting,
               2-D data into panels                                  E02ZAF


          E02 -- Curve and Surface Fitting                  Contents -- E02
          Chapter E02

          Curve and Surface Fitting

          E02ADF  Least-squares curve fit, by polynomials, arbitrary data
                  points

          E02AEF  Evaluation of fitted polynomial in one variable from
                  Chebyshev series form (simplified parameter list)

          E02AGF  Least-squares polynomial fit, values and derivatives may
                  be constrained, arbitrary data points

          E02AHF  Derivative of fitted polynomial in Chebyshev series form

          E02AJF  Integral of fitted polynomial in Chebyshev series form

          E02AKF  Evaluation of fitted polynomial in one variable, from
                  Chebyshev series form

          E02BAF  Least-squares curve cubic spline fit (including
                  interpolation)

          E02BBF  Evaluation of fitted cubic spline, function only

          E02BCF  Evaluation of fitted cubic spline, function and
                  derivatives

          E02BDF  Evaluation of fitted cubic spline, definite integral

          E02BEF  Least-squares cubic spline curve fit, automatic knot
                  placement

          E02DAF  Least-squares surface fit, bicubic splines

          E02DCF  Least-squares surface fit by bicubic splines with
                  automatic knot placement, data on rectangular grid

          E02DDF  Least-squares surface fit by bicubic splines with
                  automatic knot placement, scattered data

          E02DEF  Evaluation of a fitted bicubic spline at a vector of
                  points

          E02DFF  Evaluation of a fitted bicubic spline at a mesh of points

          E02GAF  L -approximation by general linear function
                   1

          E02ZAF  Sort 2-D data into panels for fitting bicubic splines

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02adf}{NAG On-line Documentation: e02adf}
\beginscroll
\begin{verbatim}



     E02ADF(3NAG)      Foundation Library (12/10/92)      E02ADF(3NAG)



          E02 -- Curve and Surface Fitting                           E02ADF
                  E02ADF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02ADF computes weighted least-squares polynomial approximations
          to an arbitrary set of data points.

          2. Specification

                 SUBROUTINE E02ADF (M, KPLUS1, NROWS, X, Y, W, WORK1,
                1                   WORK2, A, S, IFAIL)
                 INTEGER          M, KPLUS1, NROWS, IFAIL
                 DOUBLE PRECISION X(M), Y(M), W(M), WORK1(3*M), WORK2
                1                 (2*KPLUS1), A(NROWS,KPLUS1), S(KPLUS1)

          3. Description

          This routine determines least-squares polynomial approximations
          of degrees 0,1,...,k to the set of data points (x ,y ) with
                                                           r  r
          weights w , for r=1,2,...,m.
                   r

          The approximation of degree i has the property that it minimizes
          (sigma)  the sum of squares of the weighted residuals (epsilon) ,
                 i                                                       r
          where

                                (epsilon) =w (y -f )
                                         r  r  r  r

          and f  is the value of the polynomial of degree i at the rth data
               r
          point.

          Each polynomial is represented in Chebyshev-series form with
                              

          normalised argument x. This argument lies in the range -1 to +1
          and is related to the original variable x by the linear
          transformation

                                    (2x-x   -x   )
                                         max  min
                                 x= --------------.
                                     (x   -x   )
                                       max  min

          Here x    and x    are respectively the largest and smallest
                max      min
          values of x . The polynomial approximation of degree i is
                     r
          represented as

               1                                                   
               -a     T (x)+a     T (x)+a     T (x)+...+a       T (x),
               2 i+1,1 0     i+1,2 1     i+1,3 2         i+1,i+1 i

                   

          where T (x) is the Chebyshev polynomial of the first kind of
                 j
                                  

          degree j with argument (x).

          For i=0,1,...,k, the routine produces the values of a       , for
                                                               i+1,j+1
          j=0,1,...,i, together with the value of the root mean square
                           

                          / (sigma)
                         /         i
          residual s =  /   --------. In the case m=i+1 the routine sets
                    i \/     m-i-1
          the value of s  to zero.
                        i

          The method employed is due to Forsythe [4] and is based upon the
          generation of a set of polynomials orthogonal with respect to
          summation over the normalised data set. The extensions due to
          Clenshaw [1] to represent these polynomials as well as the
          approximating polynomials in their Chebyshev-series forms are
          incorporated. The modifications suggested by Reinsch and
          Gentleman (see [5]) to the method originally employed by Clenshaw
          for evaluating the orthogonal polynomials from their Chebyshev-
          series representations are used to give greater numerical
          stability.

          For further details of the algorithm and its use see Cox [2] and
          [3].

          Subsequent evaluation of the Chebyshev-series representations of
          the polynomial approximations should be carried out using E02AEF.

          4. References

          [1]   Clenshaw C W (1960) Curve Fitting with a Digital Computer.
                Comput. J. 2 170--173.

          [2]   Cox M G (1974) A Data-fitting Package for the Non-specialist
                User. Software for Numerical Mathematics. (ed D J Evans)
                Academic Press.

          [3]   Cox M G and Hayes J G (1973) Curve fitting: a guide and
                suite of algorithms for the non-specialist user. Report
                NAC26. National Physical Laboratory.

          [4]   Forsythe G E (1957) Generation and use of orthogonal
                polynomials for data fitting with a digital computer. J.
                Soc. Indust. Appl. Math. 5 74--88.

          [5]   Gentleman W M (1969) An Error Analysis of Goertzel's
                (Watt's) Method for Computing Fourier Coefficients. Comput.
                J. 12 160--165.

          [6]   Hayes J G (1970) Curve Fitting by Polynomials in One
                Variable. Numerical Approximation to Functions and Data. (ed
                J G Hayes) Athlone Press, London.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: the number m of data points. Constraint: M >=
               MDIST >= 2, where MDIST is the number of distinct x values
               in the data.

           2:  KPLUS1 -- INTEGER                                      Input
               On entry: k+1, where k is the maximum degree required.
               Constraint: 0 < KPLUS1 <= MDIST, where MDIST is the number
               of distinct x values in the data.

           3:  NROWS -- INTEGER                                       Input
               On entry:
               the first dimension of the array A as declared in the
               (sub)program from which E02ADF is called.
               Constraint: NROWS >= KPLUS1.

           4:  X(M) -- DOUBLE PRECISION array                         Input
               On entry: the values x  of the independent variable, for
                                     r
               r=1,2,...,m. Constraint: the values must be supplied in non-
               decreasing order with X(M) > X(1).

           5:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: the values y  of the dependent variable, for
                                     r
               r=1,2,...,m.

           6:  W(M) -- DOUBLE PRECISION array                         Input
               On entry: the set of weights, w , for r=1,2,...,m. For
                                              r
               advice on the choice of weights, see Section 2.1.2 of the
               Chapter Introduction. Constraint: W(r) > 0.0, for r=1,2,...,m.

           7:  WORK1(3*M) -- DOUBLE PRECISION array               Workspace

           8:  WORK2(2*KPLUS1) -- DOUBLE PRECISION array          Workspace

           9:  A(NROWS,KPLUS1) -- DOUBLE PRECISION array             Output
                                                

               On exit: the coefficients of T (x) in the approximating
                                             j
               polynomial of degree i. A(i+1,j+1) contains the coefficient
               a       , for i=0,1,...,k; j=0,1,...,i.
                i+1,j+1

          10:  S(KPLUS1) -- DOUBLE PRECISION array                   Output
               On exit: S(i+1) contains the root mean square residual s ,
                                                                       i
               for i=0,1,...,k, as described in Section 3. For the
               interpretation of the values of the s  and their use in
                                                    i
               selecting an appropriate degree, see Section 3.1 of the
               Chapter Introduction.

          11:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               The weights are not all strictly positive.

          IFAIL= 2
               The values of X(r), for r=1,2,...,M are not in non-
               decreasing order.

          IFAIL= 3
               All X(r) have the same value: thus the normalisation of X is
               not possible.

          IFAIL= 4
               On entry KPLUS1 < 1 (so the maximum degree required is
                        negative)

               or       KPLUS1 > MDIST, where MDIST is the number of
                        distinct x values in the data (so there cannot be a
                        unique solution for degree k=KPLUS1-1).

          IFAIL= 5
               NROWS < KPLUS1.

          7. Accuracy

          No error analysis for the method has been published. Practical
          experience with the method, however, is generally extremely
          satisfactory.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          m(k+1)(k+11).

          The approximating polynomials may exhibit undesirable
          oscillations (particularly near the ends of the range) if the
          maximum degree k exceeds a critical value which depends on the
          number of data points m and their relative positions. As a rough
          guide, for equally-spaced data, this critical value is about
              

          2*\/m. For further details see Hayes [6] page 60.

          9. Example

          Determine weighted least-squares polynomial approximations of
          degrees 0, 1, 2 and 3 to a set of 11 prescribed data points. For
          the approximation of degree 3, tabulate the data and the
          corresponding values of the approximating polynomial, together
          with the residual errors, and also the values of the
          approximating polynomial at points half-way between each pair of
          adjacent data points.

          The example program supplied is written in a general form that
          will enable polynomial approximations of degrees 0,1,...,k to be
          obtained to m data points, with arbitrary positive weights, and
          the approximation of degree k to be tabulated. E02AEF is used to
          evaluate the approximating polynomial. The program is self-
          starting in that any number of data sets can be supplied.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
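
          The following minimal driver (a sketch only, not a
          reproduction of the distributed example program; the input
          layout and array sizes are illustrative) fits polynomials of
          degrees 0 to 10 to weighted data read from unit 5 and prints
          the root mean square residuals, from which a degree may be
          chosen as described in the Chapter Introduction:

          C     Sketch only.  Reads M, then the values x, y, w for each
          C     of the M data points; assumes at least 11 distinct x
          C     values, supplied in non-decreasing order.
                PROGRAM ADFDRV
                INTEGER          MMAX, KP1
                PARAMETER        (MMAX=200, KP1=11)
                INTEGER          M, R, I, IFAIL
                DOUBLE PRECISION X(MMAX), Y(MMAX), W(MMAX)
                DOUBLE PRECISION WORK1(3*MMAX), WORK2(2*KP1)
                DOUBLE PRECISION A(KP1,KP1), S(KP1)
                READ (5,*) M
                READ (5,*) (X(R), Y(R), W(R), R=1,M)
                IFAIL = 0
                CALL E02ADF(M,KP1,KP1,X,Y,W,WORK1,WORK2,A,S,IFAIL)
                DO 10 I = 1, KP1
                   WRITE (6,*) 'Degree', I-1, ' r.m.s. residual', S(I)
             10 CONTINUE
                END

          The coefficients of the chosen degree, held in the
          corresponding row of A, may then be passed to E02AEF for
          evaluation of the fitted polynomial.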

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02aef}{NAG On-line Documentation: e02aef}
\beginscroll
\begin{verbatim}



     E02AEF(3NAG)      Foundation Library (12/10/92)      E02AEF(3NAG)



          E02 -- Curve and Surface Fitting                           E02AEF
                  E02AEF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02AEF evaluates a polynomial from its Chebyshev-series
          representation.

          2. Specification

                 SUBROUTINE E02AEF (NPLUS1, A, XCAP, P, IFAIL)
                 INTEGER          NPLUS1, IFAIL
                 DOUBLE PRECISION A(NPLUS1), XCAP, P

          3. Description

          This routine evaluates the polynomial

                        1                                   
                        -a T (x)+a T (x)+a T (x)+...+a   T (x)
                        2 1 0     2 1     3 2         n+1 n

                                                          

          for any value of x satisfying -1<=x<=1. Here T (x) denotes the
                                                        j
          Chebyshev polynomial of the first kind of degree j with argument
          

          x. The value of n is prescribed by the user.

                                    

          In practice, the variable x will usually have been obtained from
          an original variable x, where x   <=x<=x    and
                                         min      max

                                  ((x-x   )-(x   -x))
                                       min    max
                               x= -------------------
                                      (x   -x   )
                                        max  min

          Note that this form of the transformation should be used
          computationally rather than the mathematical equivalent

                                     (2x-x   -x   )
                                          min  max
                                  x= --------------
                                      (x   -x   )
                                        max  min

                                                                 

          since the former guarantees that the computed value of x differs
          from its true value by at most 4(epsilon), where (epsilon) is the
          machine precision, whereas the latter has no such guarantee.

          The method employed is based upon the three-term recurrence
          relation due to Clenshaw [1], with modifications to give greater
          numerical stability due to Reinsch and Gentleman (see [4]).
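
          In its basic form (without the Reinsch and Gentleman
          modifications used internally by the routine; this is an
          illustration only, and E02AEF itself should be used in
          practice) the recurrence may be sketched as follows, for the
          series and coefficient ordering defined above:

          C     Basic Clenshaw recurrence (sketch only; E02AEF itself
          C     uses the modified, more stable form).
                DOUBLE PRECISION FUNCTION CHEVAL (NPLUS1, A, XCAP)
                INTEGER          NPLUS1, J
                DOUBLE PRECISION A(NPLUS1), XCAP, BJ, BJP1, BJP2
                BJP1 = 0.0D0
                BJP2 = 0.0D0
                DO 10 J = NPLUS1, 2, -1
                   BJ   = 2.0D0*XCAP*BJP1 - BJP2 + A(J)
                   BJP2 = BJP1
                   BJP1 = BJ
             10 CONTINUE
                CHEVAL = XCAP*BJP1 - BJP2 + 0.5D0*A(1)
                RETURN
                END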

          For further details of the algorithm and its use see Cox [2] and
          [3].

          4. References

          [1]   Clenshaw C W (1955) A Note on the Summation of Chebyshev
                Series. Math. Tables Aids Comput. 9 118--120.

          [2]   Cox M G (1974) A Data-fitting Package for the Non-specialist
                User. Software for Numerical Mathematics. (ed D J Evans)
                Academic Press.

          [3]   Cox M G and Hayes J G (1973) Curve fitting: a guide and
                suite of algorithms for the non-specialist user. Report
                NAC26. National Physical Laboratory.

          [4]   Gentleman W M (1969) An Error Analysis of Goertzel's
                (Watt's) Method for Computing Fourier Coefficients. Comput.
                J. 12 160--165.

          5. Parameters

           1:  NPLUS1 -- INTEGER                                      Input
               On entry: the number n+1 of terms in the series (i.e., one
               greater than the degree of the polynomial). Constraint:
               NPLUS1 >= 1.

           2:  A(NPLUS1) -- DOUBLE PRECISION array                    Input
               On entry: A(i) must be set to the value of the ith
               coefficient in the series, for i=1,2,...,n+1.

           3:  XCAP -- DOUBLE PRECISION                               Input
                         

               On entry: x, the argument at which the polynomial is to be
               evaluated. It should lie in the range -1 to +1, but a value
               just outside this range is permitted (see Section 6) to
               allow for possible rounding errors committed in the
                                        

               transformation from x to x discussed in Section 3. Provided
               the recommended form of the transformation is used, a
               successful exit is thus assured whenever the value of x lies
               in the range x    to x   .
                             min     max

           4:  P -- DOUBLE PRECISION                                 Output
               On exit: the value of the polynomial.

           5:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               ABS(XCAP) > 1.0 + 4(epsilon), where (epsilon) is the
               machine precision. In this case the value of P is set
               arbitrarily to zero.

          IFAIL= 2
               On entry NPLUS1 < 1.

          7. Accuracy

          The rounding errors committed are such that the computed value of
          the polynomial is exact for a slightly perturbed set of
          coefficients a +(delta)a . The ratio of the sum of the absolute
                        i         i
          values of the (delta)a  to the sum of the absolute values of the
                                i
          a  is less than a small multiple of (n+1) times machine
           i
          precision.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          n+1.

          It is expected that a common use of E02AEF will be the evaluation
          of the polynomial approximations produced by E02ADF and E02AFF(*).

          9. Example

                                                                   

          Evaluate at 11 equally-spaced points in the interval -1<=x<=1 the
          polynomial of degree 4 with Chebyshev coefficients, 2.0, 0.5, 0.
          25, 0.125, 0.0625.

          The example program is written in a general form that will enable
          a polynomial of degree n in its Chebyshev-series form to be
                                                                   

          evaluated at m equally-spaced points in the interval -1<=x<=1.
          The program is self-starting in that any number of data sets can
          be supplied.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
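
          A minimal sketch of such a program, restricted to the
          particular data of this example (it is not a reproduction of
          the distributed example program), is:

          C     Sketch only: evaluate the degree-4 polynomial with the
          C     Chebyshev coefficients above at 11 equally-spaced
          C     points in the normalised range -1 to +1.
                PROGRAM AEFDRV
                INTEGER          I, IFAIL
                DOUBLE PRECISION A(5), XCAP, P
                DATA A /2.0D0, 0.5D0, 0.25D0, 0.125D0, 0.0625D0/
                DO 10 I = 0, 10
                   XCAP  = -1.0D0 + 0.2D0*DBLE(I)
                   IFAIL = 0
                   CALL E02AEF(5, A, XCAP, P, IFAIL)
                   WRITE (6,*) XCAP, P
             10 CONTINUE
                END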

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02agf}{NAG On-line Documentation: e02agf}
\beginscroll
\begin{verbatim}



     E02AGF(3NAG)      Foundation Library (12/10/92)      E02AGF(3NAG)



          E02 -- Curve and Surface Fitting                           E02AGF
                  E02AGF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02AGF computes constrained weighted least-squares polynomial
          approximations in Chebyshev-series form to an arbitrary set of
          data points. The values of the approximations and any number of
          their derivatives can be specified at selected points.

          2. Specification

                 SUBROUTINE E02AGF (M, KPLUS1, NROWS, XMIN, XMAX, X, Y, W,
                1                   MF, XF, YF, LYF, IP, A, S, NP1, WRK,
                2                   LWRK, IWRK, LIWRK, IFAIL)
                 INTEGER          M, KPLUS1, NROWS, MF, LYF, IP(MF), NP1,
                1                 LWRK, IWRK(LIWRK), LIWRK, IFAIL
                 DOUBLE PRECISION XMIN, XMAX, X(M), Y(M), W(M), XF(MF), YF
                1                 (LYF), A(NROWS,KPLUS1), S(KPLUS1), WRK
                2                 (LWRK)

          3. Description

          This routine determines least-squares polynomial approximations
          of degrees up to k to the set of data points (x ,y ) with weights
                                                         r  r
          w , for r=1,2,...,m. The value of k, the maximum degree required,
           r
          is prescribed by the user. At each of the values XF , for r =
                                                             r
          1,2,...,MF, of the independent variable x, the approximations and
          their derivatives up to order p  are constrained to have one of
                                         r
                                                                      MF
                                                                      --
          the user-specified values YF , for s=1,2,...,n, where n=MF+ >  p
                                      s                               --  r
                                                                      r=1

          The approximation of degree i has the property that, subject to
          the imposed constraints, it minimizes (Sigma) , the sum of the
                                                       i
          squares of the weighted residuals (epsilon)  for r=1,2,...,m
                                                     r
          where

                              (epsilon) =w (y -f (x ))
                                       r  r  r  i  r

          and f (x ) is the value of the polynomial approximation of degree
               i  r
          i at the rth data point.

          Each polynomial is represented in Chebyshev-series form with
          normalised argument x. This argument lies in the range -1 to +1
          and is related to the original variable x by the linear
          transformation

                                     2x-(x   +x   )
                                          max  min
                                  x= --------------
                                      (x   -x   )
                                        max  min

          where x    and x   , specified by the user, are respectively the
                 min      max
          lower and upper end-points of the interval of x over which the
          polynomials are to be defined.
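
          For instance, with the interval [0.0,4.0] used in the example of
          Section 9 (so that XMIN = 0.0 and XMAX = 4.0), the normalised
          argument is (2x-4)/4 = (x-2)/2, and x = 0.0, 2.0 and 4.0 map to
          -1, 0 and +1 respectively.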

          The polynomial approximation of degree i can be written as

                      1                                      
                      -a   +a   T (x)+...+a  T (x)+...+a  T (x)
                      2 i,0  i,1 1         ij j         ii i

                   

          where T (x) is the Chebyshev polynomial of the first kind of
                 j
          degree j with argument x. For i=n,n+1,...,k, the routine produces
          the values of the coefficients a  , for j=0,1,...,i, together
                                          ij
          with the value of the root mean square residual, S , defined as
                                                            i
                 

               S  = sqrt((Sigma) /(m'+n-i-1)),
                i                i

          where m' is the number of data points with non-zero weight.

          Values of the approximations may subsequently be computed using
          E02AEF or E02AKF.

                                                    

          First E02AGF determines a polynomial (mu)(x), of degree n-1,
          which satisfies the given constraints, and a polynomial (nu)(x),
          of degree n, which has value (or derivative) zero wherever a
          constrained value (or derivative) is specified. It then fits
          y -(mu)(x ), for r=1,2,...,m with polynomials of the required
           r       r
          degree in x each with factor (nu)(x). Finally the coefficients of
          (mu)(x) are added to the coefficients of these fits to give the
          coefficients of the constrained polynomial approximations to the
          data points (x ,y ), for r=1,2,...,m. The method employed is
                        r  r
          given in Hayes [3]: it is an extension of Forsythe's orthogonal
          polynomials method [2] as modified by Clenshaw [1].

          4. References

          [1]   Clenshaw C W (1960) Curve Fitting with a Digital Computer.
                Comput. J. 2 170--173.

          [2]   Forsythe G E (1957) Generation and use of orthogonal
                polynomials for data fitting with a digital computer. J.
                Soc. Indust. Appl. Math. 5 74--88.

          [3]   Hayes J G (1970) Curve Fitting by Polynomials in One
                Variable. Numerical Approximation to Functions and Data. (ed
                J G Hayes) Athlone Press, London.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: the number m of data points to be fitted.
               Constraint: M >= 1.

           2:  KPLUS1 -- INTEGER                                      Input
               On entry: k+1, where k is the maximum degree required.
               Constraint: n+1<=KPLUS1<=m''+n, where n is the total number
               of constraints and m'' is the number of data points with
               non-zero weights and distinct abscissae which do not
               coincide with any of the XF(r).

           3:  NROWS -- INTEGER                                       Input
               On entry:
               the first dimension of the array A as declared in the
               (sub)program from which E02AGF is called.
               Constraint: NROWS >= KPLUS1.

           4:  XMIN -- DOUBLE PRECISION                               Input

           5:  XMAX -- DOUBLE PRECISION                               Input
               On entry: the lower and upper end-points, respectively, of
               the interval [x   ,x   ]. Unless there are specific reasons
                              min  max
               to the contrary, it is recommended that XMIN and XMAX be set
               respectively to the lowest and highest value among the x
                                                                       r
               and XF(r). This avoids the danger of extrapolation provided
               there is a constraint point or data point with non-zero
               weight at each end-point. Constraint: XMAX > XMIN.

           6:  X(M) -- DOUBLE PRECISION array                         Input
               On entry: the value x  of the independent variable at the r
                                    r
               th data point, for r=1,2,...,m. Constraint: the X(r) must be
               in non-decreasing order and satisfy XMIN <= X(r) <= XMAX.

           7:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: Y(r) must contain y , the value of the dependent
                                            r
               variable at the rth data point, for r=1,2,...,m.

           8:  W(M) -- DOUBLE PRECISION array                         Input
               On entry: the weights w  to be applied to the data points
                                      r
               x , for r=1,2,...,m. For advice on the choice of weights see
                r
               the Chapter Introduction. Negative weights are treated as
               positive. A zero weight causes the corresponding data point
               to be ignored. Zero weight should be given to any data point
               whose x and y values both coincide with those of a
               constraint (otherwise the denominators involved in the root-
               mean-square residuals s  will be slightly in error).
                                      i

           9:  MF -- INTEGER                                          Input
               On entry: the number of values of the independent variable
               at which a constraint is specified. Constraint: MF >= 1.

          10:  XF(MF) -- DOUBLE PRECISION array                       Input
               On entry: the rth value of the independent variable at
               which a constraint is specified, for r = 1,2,...,MF.
               Constraint: these values need not be ordered but must be
               distinct and satisfy XMIN <= XF(r) <= XMAX.

          11:  YF(LYF) -- DOUBLE PRECISION array                      Input
               On entry: the values which the approximating polynomials
               and their derivatives are required to take at the points
               specified in XF. For each value of XF(r), YF contains in
               successive elements the required value of the approximation,
               its first derivative, second derivative,..., p th
                                                             r
               derivative, for r = 1,2,...,MF. Thus the value which the kth
               derivative of each approximation (k=0 referring to the
               approximation itself) is required to take at the point XF(r)
               must be contained in YF(s), where
                                  s=r+k+p +p +...+p   ,
                                         1  2      r-1
               for k=0,1,...,p  and r = 1,2,...,MF. The derivatives are
                              r
               with respect to the user's variable x.

          12:  LYF -- INTEGER                                         Input
               On entry:
               the dimension of the array YF as declared in the
               (sub)program from which E02AGF is called.
               Constraint: LYF>=n, where n=MF+p +p +...+p  .
                                               1  2      MF

          13:  IP(MF) -- INTEGER array                                Input
               On entry: IP(r) must contain p , the order of the highest-
                                             r
               order derivative specified at XF(r), for r = 1,2,...,MF.
               p =0 implies that the value of the approximation at XF(r) is
                r
               specified, but not that of any derivative. Constraint: IP(r)
               >= 0, for r=1,2,...,MF.

          14:  A(NROWS,KPLUS1) -- DOUBLE PRECISION array             Output
               On exit: A(i+1,j+1) contains the coefficient a   in the
                                                             ij
               approximating polynomial of degree i, for i=n,n+1,...,k;
               j=0,1,...,i.

          15:  S(KPLUS1) -- DOUBLE PRECISION array                   Output
               On exit: S(i+1) contains s , for i=n,n+1,...,k, the root-
                                          i
               mean-square residual corresponding to the approximating
               polynomial of degree i. In the case where the number of data
               points with non-zero weight is equal to k+1-n, s  is
                                                               i
               indeterminate: the routine sets it to zero. For the
               interpretation of the values of s  and their use in
                                                i
               selecting an appropriate degree, see Section 3.1 of the
               Chapter Introduction.

          16:  NP1 -- INTEGER                                        Output
               On exit: n+1, where n is the total number of constraint
               conditions imposed: n=MF+p +p +...+p  .
                                         1  2      MF

          17:  WRK(LWRK) -- DOUBLE PRECISION array                   Output
               On exit: WRK contains the weighted residuals of the highest
               degree of fit determined (k). The residual at the rth data
               point, for r=1,2,...,m, is in element 2(n+1)+3(m+k+1)+r
               (see the sketch following this parameter list). The rest of
               the array is used as workspace.

          18:  LWRK -- INTEGER                                        Input
               On entry:
               the dimension of the array WRK as declared in the
               (sub)program from which E02AGF is called.
               Constraint:
               LWRK >= max(4*M+3*KPLUS1, 8*n+5*IPMAX+MF+10) + 2*n + 2,
               where IPMAX = max(IP(r)).

          19:  IWRK(LIWRK) -- INTEGER array                       Workspace

          20:  LIWRK -- INTEGER                                       Input
               On entry:
               the dimension of the array IWRK as declared in the
               (sub)program from which E02AGF is called.
               Constraint: LIWRK>=2*MF+2.

          21:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).
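
          The following subroutine is a hedged sketch (the name GETRES and
          the array RES are not part of the Library) of how the weighted
          residuals of the degree-k fit might be copied out of WRK, using
          the element formula quoted in the description of WRK above.

*     Sketch only: extract the weighted residuals of the degree-k
*     fit from WRK after a successful call of E02AGF.  NP1 is the
*     value returned by E02AGF and KPLUS1 the value supplied to it;
*     the residual at the rth data point is WRK(2*NP1+3*(M+KPLUS1)+r).
      SUBROUTINE GETRES(M, KPLUS1, NP1, WRK, RES)
      INTEGER          M, KPLUS1, NP1, R, IOFF
      DOUBLE PRECISION WRK(*), RES(M)
      IOFF = 2*NP1 + 3*(M+KPLUS1)
      DO 10 R = 1, M
         RES(R) = WRK(IOFF+R)
   10 CONTINUE
      END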

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               On entry M < 1,

               or       KPLUS1 < n + 1,

               or       NROWS < KPLUS1,

               or       MF < 1,

               or       LYF < n,

               or       LWRK is too small (see Section 5),

               or       LIWRK<2*MF+2.
               (Here n is the total number of constraint conditions.)

          IFAIL= 2
               IP(r) < 0 for some r = 1,2,...,MF.

          IFAIL= 3
               XMIN >= XMAX, or XF(r) is not in the interval XMIN to XMAX
               for some r = 1,2,...,MF, or the XF(r) are not distinct.

          IFAIL= 4
               X(r) is not in the interval XMIN to XMAX for some
               r=1,2,...,M.

          IFAIL= 5
               X(r) < X(r-1) for some r=2,3,...,M.

          IFAIL= 6
               KPLUS1>m''+n, where m'' is the number of data points with
               non-zero weight and distinct abscissae which do not coincide
               with any XF(r). Thus there is no unique solution.

          IFAIL= 7
               The polynomials (mu)(x) and/or (nu)(x) cannot be determined.
               The problem supplied is too ill-conditioned. This may occur
               when the constraint points are very close together, or large
               in number, or when an attempt is made to constrain high-
               order derivatives.

          7. Accuracy

          No complete error analysis exists for either the interpolating
          algorithm or the approximating algorithm. However, considerable
          experience with the approximating algorithm shows that it is
          generally extremely satisfactory. Also, the moderate number of
          low-order constraints typical of data fitting applications is
          unlikely to cause difficulty with the interpolating algorithm.

          8. Further Comments

          The time taken by the routine to form the interpolating
                                                       3
          polynomial is approximately proportional to n , and that to form
          the approximating polynomials is very approximately proportional
          to m(k+1)(k+1-n).

          To carry out a least-squares polynomial fit without constraints,
          use E02ADF. To carry out polynomial interpolation only, use
          E01AEF(*).

          9. Example

          The example program reads data in the following order, using the
          notation of the parameter list above:

               MF

               IP(i), XF(i), Y-value and derivative values (if any) at
               XF(i), for i= 1,2,...,MF

               M

               X(i), Y(i), W(i), for i=1,2,...,M

               k, XMIN, XMAX

          The output is:

               the root-mean-square residual for each degree from n to k;

               the Chebyshev coefficients for the fit of degree k;

               the data points, and the fitted values and residuals for
               the fit of degree k.

          The program is written in a generalized form which will read any
          number of data sets.

          The data set supplied specifies 5 data points in the interval [0.
          0,4.0] with unit weights, to which are to be fitted polynomials,
          p, of degrees up to 4, subject to the 3 constraints:

               p(0.0)=1.0,  p'(0.0)=-2.0,  p(4.0)=9.0.
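
          As a worked illustration (a sketch only, not the distributed
          example program; the program name CONPCK is hypothetical), these
          three constraints would be packed into the constraint arguments
          described in Section 5 as follows.

*     Sketch only: pack the constraints p(0.0)=1.0, p'(0.0)=-2.0 and
*     p(4.0)=9.0 into the constraint arguments of E02AGF.
      PROGRAM CONPCK
      INTEGER          MF, LYF, N, IP(2)
      DOUBLE PRECISION XF(2), YF(3)
*     x=0.0 carries a value and a first derivative (IP(1)=1);
*     x=4.0 carries a value only (IP(2)=0)
      DATA             MF /2/, IP /1, 0/
      DATA             XF /0.0D0, 4.0D0/
*     YF holds, in order, p(0.0), p'(0.0) and p(4.0)
      DATA             YF /1.0D0, -2.0D0, 9.0D0/
*     total number of constraint conditions n = MF + IP(1) + IP(2)
      N = MF + IP(1) + IP(2)
      LYF = N
      WRITE (*,*) 'n =', N, '  LYF =', LYF, '  NP1 would be', N + 1
      END

          With n=3 and degrees up to k=4 requested, KPLUS1 = 5 satisfies
          the constraint n+1 <= KPLUS1 described in Section 5.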

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02ahf}{NAG On-line Documentation: e02ahf}
\beginscroll
\begin{verbatim}



     E02AHF(3NAG)      Foundation Library (12/10/92)      E02AHF(3NAG)



          E02 -- Curve and Surface Fitting                           E02AHF
                  E02AHF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02AHF determines the coefficients in the Chebyshev-series
          representation of the derivative of a polynomial given in
          Chebyshev-series form.

          2. Specification

                 SUBROUTINE E02AHF (NP1, XMIN, XMAX, A, IA1, LA, PATM1,
                1                   ADIF, IADIF1, LADIF, IFAIL)
                 INTEGER          NP1, IA1, LA, IADIF1, LADIF, IFAIL
                 DOUBLE PRECISION XMIN, XMAX, A(LA), PATM1, ADIF(LADIF)

          3. Description

          This routine forms the polynomial which is the derivative of a
          given polynomial. Both the original polynomial and its derivative
          are represented in Chebyshev-series form. Given the coefficients
          a , for i=0,1,...,n, of a polynomial p(x) of degree n, where
           i

                                  1                    
                            p(x)= -a +a T (x)+...+a T (x)
                                  2 0  1 1         n n

                                               

          the routine returns the coefficients a , for i=0,1,...,n-1, of
                                                i
          the polynomial q(x) of degree n-1, where

                            dp(x)  1                        
                      q(x)= -----= -a +a T (x)+...+a   T   (x).
                             dx    2 0  1 1         n-1 n-1

                  

          Here T (x) denotes the Chebyshev polynomial of the first kind of
                j
          degree j with argument x. It is assumed that the normalised
          variable x in the interval [-1,+1] was obtained from the user's
          original variable x in the interval [x   ,x   ] by the linear
                                                min  max
          transformation

                                     2x-(x   +x   )
                                          max  min
                                  x= --------------
                                       x   -x
                                        max  min

          and that the user requires the derivative to be with respect to
          the variable x. If the derivative with respect to x is required,
          set x   =1 and x   =-1.
               max        min

          Values of the derivative can subsequently be computed, from the
          coefficients obtained, by using E02AKF.

          The method employed is that of [1] modified to obtain the
          derivative with respect to x. Initially setting a   =a =0, the
                                                            n+1  n
          routine forms successively

                                    2
                     a   =a   + ---------2ia ,   i=n,n-1,...,1.
                      i-1  i+1  x   -x      i
                                 max  min
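
          As an illustration of this recurrence (a sketch only, not the
          NAG routine: the name CHDIF is hypothetical, the index
          increments are fixed at 1 and no argument checking is done), the
          coefficients of the derivative may be formed as follows.

*     Sketch only: form the Chebyshev coefficients of the derivative
*     by the recurrence above, with unit index increments.  A(0:N)
*     holds a(i); ADIF(0:N) receives abar(i), with ADIF(N) set to
*     zero as described for the ADIF parameter in Section 5.
      SUBROUTINE CHDIF(N, XMIN, XMAX, A, ADIF)
      INTEGER          N, I
      DOUBLE PRECISION XMIN, XMAX, A(0:N), ADIF(0:N), SCALE
      SCALE = 2.0D0/(XMAX-XMIN)
      ADIF(N) = 0.0D0
      DO 10 I = N, 1, -1
*        abar(i-1) = abar(i+1) + (2/(xmax-xmin))*2*i*a(i),
*        with abar(n+1) = abar(n) = 0
         IF (I .LT. N) THEN
            ADIF(I-1) = ADIF(I+1) + SCALE*2.0D0*DBLE(I)*A(I)
         ELSE
            ADIF(I-1) = SCALE*2.0D0*DBLE(I)*A(I)
         END IF
   10 CONTINUE
      END

          Setting XMIN = -1 and XMAX = 1 gives the derivative with respect
          to the normalised variable itself, as noted above.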

          4. References

          [1]   Unknown (1961) Chebyshev-series. Modern Computing Methods,
                Chapter 8. NPL Notes on Applied Science (2nd Edition). 16
                HMSO.

          5. Parameters

           1:  NP1 -- INTEGER                                         Input
               On entry: n+1, where n is the degree of the given
               polynomial p(x). Thus NP1 is the number of coefficients in
               this polynomial. Constraint: NP1 >= 1.

           2:  XMIN -- DOUBLE PRECISION                               Input

           3:  XMAX -- DOUBLE PRECISION                               Input
               On entry: the lower and upper end-points respectively of
               the interval [x   ,x   ]. The Chebyshev-series
                              min  max
               representation is in terms of the normalised variable x,
               where
                                       2x-(x   +x   )
                                            max  min
                                    x= --------------.
                                         x   -x
                                          max  min
               Constraint: XMAX > XMIN.

           4:  A(LA) -- DOUBLE PRECISION array                        Input
               On entry: the Chebyshev coefficients of the polynomial p(x).
               Specifically, element 1 + i*IA1 of A must contain the
               coefficient a , for i=0,1,...,n. Only these n+1 elements
                            i
               will be accessed.

               Unchanged on exit, but see ADIF, below.

           5:  IA1 -- INTEGER                                         Input
               On entry: the index increment of A. Most frequently the
               Chebyshev coefficients are stored in adjacent elements of A,
               and IA1 must be set to 1. However, if, for example, they are
               stored in A(1),A(4),A(7),..., then the value of IA1 must be
               3. See also Section 8. Constraint: IA1 >= 1.

           6:  LA -- INTEGER                                          Input
               On entry:
               the dimension of the array A as declared in the (sub)program
               from which E02AHF is called.
               Constraint: LA>=1+(NP1-1)*IA1.

           7:  PATM1 -- DOUBLE PRECISION                             Output
               On exit: the value of p(x   ). If this value is passed to
                                         min
               the integration routine E02AJF with the coefficients of q(x)
               , then the original polynomial p(x) is recovered, including
               its constant coefficient.

           8:  ADIF(LADIF) -- DOUBLE PRECISION array                 Output
               On exit: the Chebyshev coefficients of the derived
               polynomial q(x). (The differentiation is with respect to the
               variable x). Specifically, element 1+i*IADIF1 of ADIF
               contains the coefficient a , for i=0,1,...,n-1. Additionally
                                         i
               element 1+n*IADIF1 is set to zero. A call of the routine may
               have the array name ADIF the same as A, provided that note
               is taken of the order in which elements are overwritten,
               when choosing the starting elements and increments IA1 and
               IADIF1: i.e., the coefficients a ,a ,...,a    must be intact
                                               0  1      i-1
               after coefficient a  is stored. In particular, it is
                                  i
               possible to overwrite the a  completely by having IA1 =
                                          i
               IADIF1, and the actual arrays for A and ADIF identical.

           9:  IADIF1 -- INTEGER                                      Input
               On entry: the index increment of ADIF. Most frequently the
               Chebyshev coefficients are required in adjacent elements of
               ADIF, and IADIF1 must be set to 1. However, if, for example,
               they are to be stored in ADIF(1),ADIF(4),ADIF(7),..., then
               the value of IADIF1 must be 3. See Section 8. Constraint:
               IADIF1 >= 1.

          10:  LADIF -- INTEGER                                       Input
               On entry:
               the dimension of the array ADIF as declared in the
               (sub)program from which E02AHF is called.
               Constraint: LADIF>=1+(NP1-1)*IADIF1.

          11:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               On entry NP1 < 1,

               or       XMAX <= XMIN,

               or       IA1 < 1,

               or       LA<=(NP1-1)*IA1,

               or       IADIF1 < 1,

               or       LADIF<=(NP1-1)*IADIF1.

          7. Accuracy

          There is always a loss of precision in numerical differentiation,
          in this case associated with the multiplication by 2i in the
          formula quoted in Section 3.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          n+1.

          The increments IA1, IADIF1 are included as parameters to give a
          degree of flexibility which, for example, allows a polynomial in
          two variables to be differentiated with respect to either
          variable without rearranging the coefficients.

          9. Example

          Suppose a polynomial has been computed in Chebyshev-series form
          to fit data over the interval [-0.5,2.5]. The example program
          evaluates the 1st and 2nd derivatives of this polynomial at 4
          equally spaced points over the interval. (For the purposes of
          this example, XMIN, XMAX and the Chebyshev coefficients are
          simply supplied in DATA statements. Normally a program would
          first read in or generate data and compute the fitted
          polynomial.)

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02ajf}{NAG On-line Documentation: e02ajf}
\beginscroll
\begin{verbatim}



     E02AJF(3NAG)      Foundation Library (12/10/92)      E02AJF(3NAG)



          E02 -- Curve and Surface Fitting                           E02AJF
                  E02AJF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02AJF determines the coefficients in the Chebyshev-series
          representation of the indefinite integral of a polynomial given
          in Chebyshev-series form.

          2. Specification

                 SUBROUTINE E02AJF (NP1, XMIN, XMAX, A, IA1, LA, QATM1,
                1                   AINT, IAINT1, LAINT, IFAIL)
                 INTEGER          NP1, IA1, LA, IAINT1, LAINT, IFAIL
                 DOUBLE PRECISION XMIN, XMAX, A(LA), QATM1, AINT(LAINT)

          3. Description

          This routine forms the polynomial which is the indefinite
          integral of a given polynomial. Both the original polynomial and
          its integral are represented in Chebyshev-series form. If
          supplied with the coefficients a , for i=0,1,...,n, of a
                                          i
          polynomial p(x) of degree n, where

                                 1                    
                           p(x)= -a +a T (x)+...+a T (x),
                                 2 0  1 1         n n

          the routine returns the coefficients a' , for i=0,1,...,n+1, of
                                                 i
          the polynomial q(x) of degree n+1, where

                              1                           
                        q(x)= -a' +a' T (x)+...+a'   T   (x),
                              2  0   1 1          n+1 n+1

          and

                                         /
                                   q(x)= |p(x)dx.
                                         /

                  

          Here T (x) denotes the Chebyshev polynomial of the first kind of
                j
          degree j with argument x. It is assumed that the normalised
          variable x in the interval [-1,+1] was obtained from the user's
          original variable x in the interval [x   ,x   ] by the linear
                                                min  max
          transformation

                                     2x-(x   +x   )
                                          max  min
                                  x= --------------
                                       x   -x
                                        max  min

          and that the user requires the integral to be with respect to the
          variable x. If the integral with respect to x is required, set
          x   =1 and x   =-1.
           max        min

          Values of the integral can subsequently be computed, from the
          coefficients obtained, by using E02AKF.

          The method employed is that of Chebyshev-series [1] modified for
          integrating with respect to x. Initially taking a   =a   =0, the
                                                           n+1  n+2
          routine forms successively

                          a   -a     x   -x
                           i-1  i+1   max  min
                     a' = ---------* ---------,   i=n+1,n,...,1.
                       i     2i          2

          The constant coefficient a'  is chosen so that q(x) is equal to a
                                     0
          specified value, QATM1, at the lower end-point of the interval on
          which it is defined, i.e., x=-1, which corresponds to x=x   .
                                                                   min
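
          As an illustration of this process (a sketch only, not the NAG
          routine: the name CHINT is hypothetical, the index increments
          are fixed at 1 and no argument checking is done), the
          coefficients of the integral may be formed as follows.

*     Sketch only: form the Chebyshev coefficients of the indefinite
*     integral by the recurrence above.  A(0:N) holds a(i); on exit
*     AINT(0:N+1) holds a'(i), with the constant chosen so that the
*     integral has the value QATM1 at xbar = -1.
      SUBROUTINE CHINT(N, XMIN, XMAX, A, QATM1, AINT)
      INTEGER          N, I
      DOUBLE PRECISION XMIN, XMAX, A(0:N), QATM1, AINT(0:N+1)
      DOUBLE PRECISION SCALE, AIP1, Q
      SCALE = (XMAX-XMIN)/2.0D0
      DO 10 I = N+1, 1, -1
*        a(n+1) = a(n+2) = 0 by convention
         IF (I+1 .LE. N) THEN
            AIP1 = A(I+1)
         ELSE
            AIP1 = 0.0D0
         END IF
         AINT(I) = SCALE*(A(I-1)-AIP1)/(2.0D0*DBLE(I))
   10 CONTINUE
*     fix a'(0) from q(-1) = QATM1, using the fact that the
*     Chebyshev polynomial of degree i has the value (-1)**i at -1
      Q = 0.0D0
      DO 20 I = 1, N+1
         Q = Q + DBLE((-1)**I)*AINT(I)
   20 CONTINUE
      AINT(0) = 2.0D0*(QATM1-Q)
      END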

          4. References

          [1]   Unknown (1961) Chebyshev-series. Modern Computing Methods,
                Chapter 8. NPL Notes on Applied Science (2nd Edition). 16
                HMSO.

          5. Parameters

           1:  NP1 -- INTEGER                                         Input
               On entry: n+1, where n is the degree of the given
               polynomial p(x). Thus NP1 is the number of coefficients in
               this polynomial. Constraint: NP1 >= 1.

           2:  XMIN -- DOUBLE PRECISION                               Input

           3:  XMAX -- DOUBLE PRECISION                               Input
               On entry: the lower and upper end-points respectively of
               the interval [x   ,x   ]. The Chebyshev-series
                              min  max
               representation is in terms of the normalised variable x,
               where
                                       2x-(x   +x   )
                                            max  min
                                    x= --------------.
                                         x   -x
                                          max  min
               Constraint: XMAX > XMIN.

           4:  A(LA) -- DOUBLE PRECISION array                        Input
               On entry: the Chebyshev coefficients of the polynomial p(x)
               . Specifically, element 1+i*IA1 of A must contain the
               coefficient a , for i=0,1,...,n. Only these n+1 elements
                            i
               will be accessed.

               Unchanged on exit, but see AINT, below.

           5:  IA1 -- INTEGER                                         Input
               On entry: the index increment of A. Most frequently the
               Chebyshev coefficients are stored in adjacent elements of A,
               and IA1 must be set to 1. However, if for example, they are
               stored in A(1),A(4),A(7),..., then the value of IA1 must be
               3. See also Section 8. Constraint: IA1 >= 1.

           6:  LA -- INTEGER                                          Input
               On entry:
               the dimension of the array A as declared in the (sub)program
               from which E02AJF is called.
               Constraint: LA>=1+(NP1-1)*IA1.

           7:  QATM1 -- DOUBLE PRECISION                              Input
               On entry: the value that the integrated polynomial is
               required to have at the lower end-point of its interval of
               definition, i.e., at x=-1 which corresponds to x=x   . Thus,
                                                                  min
               QATM1 is a constant of integration and will normally be set
               to zero by the user.

           8:  AINT(LAINT) -- DOUBLE PRECISION array                 Output
               On exit: the Chebyshev coefficients of the integral q(x).
               (The integration is with respect to the variable x, and the
               constant coefficient is chosen so that q(x   ) equals QATM1).
                                                         min
               Specifically, element 1+i*IAINT1 of AINT contains the
               coefficient a' , for i=0,1,...,n+1. A call of the routine
                             i
               may have the array name AINT the same as A, provided that
               note is taken of the order in which elements are overwritten
               when choosing starting elements and increments IA1 and
               IAINT1: i.e., the coefficients, a ,a ,...,a    must be
                                                0  1      i-2
               intact after coefficient a'  is stored. In particular it is
                                          i
               possible to overwrite the a  entirely by having IA1 =
                                          i
               IAINT1, and the actual array for A and AINT identical.

           9:  IAINT1 -- INTEGER                                      Input
               On entry: the index increment of AINT. Most frequently the
               Chebyshev coefficients are required in adjacent elements of
               AINT, and IAINT1 must be set to 1. However, if, for example,
               they are to be stored in AINT(1),AINT(4),AINT(7),..., then
               the value of IAINT1 must be 3. See also Section 8.
               Constraint: IAINT1 >= 1.

          10:  LAINT -- INTEGER                                       Input
               On entry:
               the dimension of the array AINT as declared in the
               (sub)program from which E02AJF is called.
               Constraint: LAINT>=1+NP1*IAINT1.

          11:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               On entry NP1 < 1,

               or       XMAX <= XMIN,

               or       IA1 < 1,

               or       LA<=(NP1-1)*IA1,

               or       IAINT1 < 1,

               or       LAINT<=NP1*IAINT1.

          7. Accuracy

          In general there is a gain in precision in numerical integration,
          in this case associated with the division by 2i in the formula
          quoted in Section 3.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          n+1.

          The increments IA1, IAINT1 are included as parameters to give a
          degree of flexibility which, for example, allows a polynomial in
          two variables to be integrated with respect to either variable
          without rearranging the coefficients.

          9. Example

          Suppose a polynomial has been computed in Chebyshev-series form
          to fit data over the interval [-0.5,2.5]. The example program
          evaluates the integral of the polynomial from 0.0 to 2.0. (For
          the purpose of this example, XMIN, XMAX and the Chebyshev
          coefficients are simply supplied in DATA statements. Normally a
          program would read in or generate data and compute the fitted
          polynomial).

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02akf}{NAG On-line Documentation: e02akf}
\beginscroll
\begin{verbatim}



     E02AKF(3NAG)      Foundation Library (12/10/92)      E02AKF(3NAG)



          E02 -- Curve and Surface Fitting                           E02AKF
                  E02AKF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02AKF evaluates a polynomial from its Chebyshev-series
          representation, allowing an arbitrary index increment for
          accessing the array of coefficients.

          2. Specification

                 SUBROUTINE E02AKF (NP1, XMIN, XMAX, A, IA1, LA, X, RESULT,
                1                   IFAIL)
                 INTEGER          NP1, IA1, LA, IFAIL
                 DOUBLE PRECISION XMIN, XMAX, A(LA), X, RESULT

          3. Description

          If supplied with the coefficients a , for i=0,1,...,n, of a
                                             i
          polynomial p(x) of degree n, where

                                 1                    
                           p(x)= -a +a T (x)+...+a T (x),
                                 2 0  1 1         n n

          this routine returns the value of p(x) at a user-specified value
          of the variable x. Here T (x) denotes the Chebyshev polynomial of
                                   j
          the first kind of degree j with argument x. It is assumed that
          the independent variable x in the interval [-1,+1] was obtained
          from the user's original variable x in the interval [x   ,x   ]
                                                                min  max
          by the linear transformation

                                    2x-(x   +x   )
                                         max  min
                                 x= --------------.
                                      x   -x
                                       max  min

          The coefficients a  may be supplied in the array A, with any
                            i
          increment between the indices of array elements which contain
          successive coefficients. This enables the routine to be used in
          surface fitting and other applications, in which the array might
          have two or more dimensions.
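
          For example, suppose the coefficients c(i,j), for i=0,1,...,NX
          and j=0,1,...,NY, of a bivariate Chebyshev series are held in a
          two-dimensional Fortran array C, with c(i,j) in C(i+1,j+1), and
          that the terms with i=0 or j=0 are halved in the usual way (that
          convention is an assumption of this sketch, not a statement of
          this document). Because Fortran stores C by columns, row i of C
          starts at C(i+1,1) with an index increment of NX+1, which is
          exactly what IA1 expresses. The program below is a hedged sketch
          of evaluating such a series at a point (X,Y); the interval and
          coefficient values are hypothetical and the Library must be
          available for E02AKF itself.

*     Hedged sketch: evaluate a bivariate Chebyshev series by
*     repeated calls of E02AKF.  Stage 1 collapses the y-dependence
*     of each row of C, using IA1 = NX+1 to step along a row of the
*     column-ordered array; stage 2 sums the results in x.
      PROGRAM BIVEVL
      INTEGER          NX, NY, I, IFAIL
      PARAMETER        (NX=3, NY=2)
      DOUBLE PRECISION C(NX+1,NY+1), B(NX+1), X, Y, P
      DOUBLE PRECISION XMIN, XMAX, YMIN, YMAX
*     hypothetical interval and coefficients, purely illustrative
*     (the repeat count 12 is (NX+1)*(NY+1))
      DATA             XMIN, XMAX, YMIN, YMAX /-1.0D0, 1.0D0, -1.0D0,
     *                 1.0D0/
      DATA             C /12*1.0D0/
      X = 0.5D0
      Y = 0.25D0
*     stage 1: b(i) = c(i,0)/2 + sum over j>=1 of c(i,j)Tj(ybar)
      DO 10 I = 0, NX
         IFAIL = 0
         CALL E02AKF(NY+1, YMIN, YMAX, C(I+1,1), NX+1, 1+NY*(NX+1),
     *               Y, B(I+1), IFAIL)
   10 CONTINUE
*     stage 2: p(x,y) = b(0)/2 + sum over i>=1 of b(i)Ti(xbar)
      IFAIL = 0
      CALL E02AKF(NX+1, XMIN, XMAX, B, 1, NX+1, X, P, IFAIL)
      WRITE (*,*) 'p(x,y) =', P
      END

          The same device addresses a column of C: the x-coefficients for
          a fixed j start at C(1,j+1) with an index increment of 1.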

          The method employed is based upon the three-term recurrence
          relation due to Clenshaw [1], with modifications due to Reinsch
          and Gentleman (see [4]). For further details of the algorithm and
          its use see Cox [2] and Cox and Hayes [3].

          4. References

          [1]   Clenshaw C W (1955) A Note on the Summation of Chebyshev-
                series. Math. Tables Aids Comput. 9 118--120.

          [2]   Cox M G (1973) A data-fitting package for the non-specialist
                user. Report NAC40. National Physical Laboratory.

          [3]   Cox M G and Hayes J G (1973) Curve fitting: a guide and
                suite of algorithms for the non-specialist user. Report
                NAC26. National Physical Laboratory.

          [4]   Gentleman W M (1969) An Error Analysis of Goertzel's
                (Watt's) Method for Computing Fourier Coefficients. Comput.
                J. 12 160--165.

          5. Parameters

           1:  NP1 -- INTEGER                                         Input
               On entry: n+1, where n is the degree of the given
                            

               polynomial p(x). Constraint: NP1 >= 1.

           2:  XMIN -- DOUBLE PRECISION                               Input

           3:  XMAX -- DOUBLE PRECISION                               Input
               On entry: the lower and upper end-points respectively of
               the interval [x   ,x   ]. The Chebyshev-series
                              min  max
               representation is in terms of the normalised variable x,
               where
                                       2x-(x   +x   )
                                            max  min
                                    x= --------------.
                                         x   -x
                                          max  min
               Constraint: XMIN < XMAX.

           4:  A(LA) -- DOUBLE PRECISION array                        Input
               On entry: the Chebyshev coefficients of the polynomial p(x).
               Specifically, element 1+i*IA1 must contain the coefficient
               a , for i=0,1,...,n. Only these n+1 elements will be
                i
               accessed.

           5:  IA1 -- INTEGER                                         Input
               On entry: the index increment of A. Most frequently, the
               Chebyshev coefficients are stored in adjacent elements of A,
               and IA1 must be set to 1. However, if, for example, they are
               stored in A(1),A(4),A(7),..., then the value of IA1 must be
               3. Constraint: IA1 >= 1.

           6:  LA -- INTEGER                                          Input
               On entry:
               the dimension of the array A as declared in the (sub)program
               from which E02AKF is called.
               Constraint: LA>=(NP1-1)*IA1+1.

           7:  X -- DOUBLE PRECISION                                  Input
               On entry: the argument x at which the polynomial is to be
               evaluated. Constraint: XMIN <= X <= XMAX.

           8:  RESULT -- DOUBLE PRECISION                            Output
                                                      

               On exit: the value of the polynomial p(x).

           9:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               On entry NP1 < 1,

               or       IA1 < 1,

               or       LA<=(NP1-1)*IA1,

               or       XMIN >= XMAX.

          IFAIL= 2
               X does not satisfy the restriction XMIN <= X <= XMAX.

          7. Accuracy

          The rounding errors are such that the computed value of the
          polynomial is exact for a slightly perturbed set of coefficients
          a +(delta)a . The ratio of the sum of the absolute values of the
           i         i
          (delta)a  to the sum of the absolute values of the a  is less
                  i                                           i
          than a small multiple of (n+1)*machine precision.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          n+1.

          9. Example

          Suppose a polynomial has been computed in Chebyshev-series form
          to fit data over the interval [-0.5,2.5]. The example program
          evaluates the polynomial at 4 equally spaced points over the
          interval. (For the purposes of this example, XMIN, XMAX and the
          Chebyshev coefficients are supplied in DATA statements. Normally
          a program would first read in or generate data and compute the
          fitted polynomial.)

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02baf}{NAG On-line Documentation: e02baf}
\beginscroll
\begin{verbatim}



     E02BAF(3NAG)      Foundation Library (12/10/92)      E02BAF(3NAG)



          E02 -- Curve and Surface Fitting                           E02BAF
                  E02BAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02BAF computes a weighted least-squares approximation to an
          arbitrary set of data points by a cubic spline with knots
          prescribed by the user. Cubic spline interpolation can also be
          carried out.

          2. Specification

                 SUBROUTINE E02BAF (M, NCAP7, X, Y, W, LAMDA, WORK1, WORK2,
                1                   C, SS, IFAIL)
                 INTEGER          M, NCAP7, IFAIL
                 DOUBLE PRECISION X(M), Y(M), W(M), LAMDA(NCAP7), WORK1(M),
                1                 WORK2(4*NCAP7), C(NCAP7), SS

          3. Description

          This routine determines a least-squares cubic spline
          approximation s(x) to the set of data points (x ,y ) with weights
                                                         r  r
          w , for r=1,2,...,m. The value of NCAP7 = n+7, where n is the
           r
          number of intervals of the spline (one greater than the number of
          interior knots), and the values of the knots
          (lambda) ,(lambda) ,...,(lambda)   , interior to the data
                  5         6             n+3
          interval, are prescribed by the user.

          s(x) has the property that it minimizes (theta), the sum of
          squares of the weighted residuals (epsilon) , for r=1,2,...,m,
                                                     r
          where

                              (epsilon) =w (y -s(x )).
                                       r  r  r    r

          The routine produces this minimizing value of (theta) and the
          coefficients c ,c ,...,c , where q=n+3, in the B-spline
                        1  2      q
          representation

                                        q
                                        --
                                  s(x)= >  c N (x).
                                        --  i i
                                        i=1

          Here N (x) denotes the normalised B-spline of degree 3 defined
                i
          upon the knots (lambda) ,(lambda)   ,...,(lambda)   .
                                 i         i+1             i+4

          In order to define the full set of B-splines required, eight
          additional knots (lambda) ,(lambda) ,(lambda) ,(lambda)  and
                                   1         2         3         4
          (lambda)   ,(lambda)   ,(lambda)   ,(lambda)    are inserted
                  n+4         n+5         n+6         n+7
          automatically by the routine. The first four of these are set
          equal to the smallest x  and the last four to the largest x .
                                 r                                   r

          The representation of s(x) in terms of B-splines is the most
          compact form possible in that only n+3 coefficients, in
          addition to the n+7 knots, fully define s(x).

          The method employed involves forming and then computing the
          least-squares solution of a set of m linear equations in the
          coefficients c  (i=1,2,...,n+3). The equations are formed using a
                        i
          recurrence relation for B-splines that is unconditionally stable
          (Cox [1], de Boor [5]), even for multiple (coincident) knots. The
          least-squares solution is also obtained in a stable manner by
          using orthogonal transformations, viz. a variant of Givens
          rotations (Gentleman [6] and [7]). This requires only one
          equation to be stored at a time. Full advantage is taken of the
          structure of the equations, there being at most four non-zero
          values of N (x) for any value of x and hence at most four
                     i
          coefficients in each equation.

          For further details of the algorithm and its use see Cox [2], [3]
          and [4].

          Subsequent evaluation of s(x) from its B-spline representation
          may be carried out using E02BBF. If derivatives of s(x) are also
          required, E02BCF may be used. E02BDF can be used to compute the
          definite integral of s(x).
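
          A hedged call sketch of a typical use of the routine is given
          below (it is not the distributed example program; the program
          name SPLFIT and the size limits MMAX and NCMAX are
          hypothetical). The data points, weights and interior knots are
          read from standard input, so no particular data set is assumed.

*     Hedged sketch: weighted least-squares cubic spline fit using
*     the E02BAF interface specified in Section 2.  M, NCAP7, the
*     data (X, Y, W) and any interior knots LAMDA(5),...,
*     LAMDA(NCAP7-4) are read from standard input.
      PROGRAM SPLFIT
      INTEGER          MMAX, NCMAX
      PARAMETER        (MMAX=200, NCMAX=50)
      INTEGER          M, NCAP7, R, IFAIL
      DOUBLE PRECISION X(MMAX), Y(MMAX), W(MMAX), LAMDA(NCMAX),
     *                 WORK1(MMAX), WORK2(4*NCMAX), C(NCMAX), SS
      READ (*,*) M, NCAP7
      READ (*,*) (X(R), Y(R), W(R), R=1,M)
      IF (NCAP7 .GT. 8) READ (*,*) (LAMDA(R), R=5,NCAP7-4)
      IFAIL = 0
      CALL E02BAF(M, NCAP7, X, Y, W, LAMDA, WORK1, WORK2, C, SS,
     *            IFAIL)
      WRITE (*,*) 'Residual sum of squares =', SS
      WRITE (*,*) 'B-spline coefficients:'
      WRITE (*,'(1X,4D15.6)') (C(R), R=1,NCAP7-4)
      END

          The constraints on M, NCAP7, the weights and the knot ordering
          given in Sections 5 and 6 apply to the values read.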

          4. References

          [1]   Cox M G (1972) The Numerical Evaluation of B-splines. J.
                Inst. Math. Appl. 10 134--149.

          [2]   Cox M G (1974) A Data-fitting Package for the Non-specialist
                User. Software for Numerical Mathematics. (ed D J Evans)
                Academic Press.

          [3]   Cox M G (1975) Numerical methods for the interpolation and
                approximation of data by spline functions. PhD Thesis. City
                University, London.

          [4]   Cox M G and Hayes J G (1973) Curve fitting: a guide and
                suite of algorithms for the non-specialist user. Report
                NAC26. National Physical Laboratory.

          [5]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          [6]   Gentleman W M (1974) Algorithm AS 75. Basic Procedures for
                Large Sparse or Weighted Linear Least-squares Problems.
                Appl. Statist. 23 448--454.

          [7]   Gentleman W M (1973) Least-squares Computations by Givens
                Transformations without Square Roots. J. Inst. Math. Applic.
                12 329--336.

          [8]   Schoenberg I J and Whitney A (1953) On Polya Frequency
                Functions III. Trans. Amer. Math. Soc. 74  246--259.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: the number m of data points. Constraint: M >=
               MDIST >= 4, where MDIST is the number of distinct x values
               in the data.

           2:  NCAP7 -- INTEGER                                       Input
                                    

               On entry: n+7, where n is the number of intervals of the
               spline (which is one greater than the number of interior
               knots, i.e., the knots strictly within the range x  to x )
                                                                 1     m
               over which the spline is defined. Constraint: 8 <= NCAP7 <=
               MDIST + 4, where MDIST is the number of distinct x values in
               the data.

           3:  X(M) -- DOUBLE PRECISION array                         Input
               On entry: the values x  of the independent variable
                                     r
               (abscissa), for r=1,2,...,m. Constraint: x <=x <=...<=x .
                                                         1   2        m

           4:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: the values y  of the dependent variable
                                     r
               (ordinate), for r=1,2,...,m.

           5:  W(M) -- DOUBLE PRECISION array                         Input
               On entry: the values w  of the weights, for r=1,2,...,m.
                                     r
               For advice on the choice of weights, see the Chapter
               Introduction. Constraint: W(r) > 0, for r=1,2,...,m.

           6:  LAMDA(NCAP7) -- DOUBLE PRECISION array          Input/Output
               On entry: LAMDA(i) must be set to the (i-4)th (interior)
               knot, (lambda) , for i=5,6,...,n+3. Constraint:
                             i
               X(1) < LAMDA(5) <= LAMDA(6) <=... <= LAMDA(NCAP7-4) < X(M).
               On exit: the input values are unchanged, and LAMDA(i), for
               i = 1, 2, 3, 4, NCAP7-3, NCAP7-2, NCAP7-1, NCAP7 contain
               the additional
               (exterior) knots introduced by the routine. For advice on
               the choice of knots, see Section 3.3 of the Chapter
               Introduction.

           7:  WORK1(M) -- DOUBLE PRECISION array                 Workspace

           8:  WORK2(4*NCAP7) -- DOUBLE PRECISION array           Workspace

           9:  C(NCAP7) -- DOUBLE PRECISION array                    Output
               On exit: the coefficient c  of the B-spline N (x), for
                                         i                  i
               i=1,2,...,n+3. The remaining elements of the array are not
               used.

          10:  SS -- DOUBLE PRECISION                                Output
               On exit: the residual sum of squares, (theta).

          11:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               The knots fail to satisfy the condition

               X(1) < LAMDA(5) <= LAMDA(6) <=... <= LAMDA(NCAP7-4) < X(M).
               Thus the knots are not in correct order or are not interior
               to the data interval.

          IFAIL= 2
               The weights are not all strictly positive.

          IFAIL= 3
               The values of X(r), for r = 1,2,...,M are not in non-
               decreasing order.

          IFAIL= 4
               NCAP7 < 8 (so the number of interior knots is negative) or
               NCAP7 > MDIST + 4, where MDIST is the number of distinct x
               values in the data (so there cannot be a unique solution).

          IFAIL= 5
               The conditions specified by Schoenberg and Whitney [8] fail
               to hold for at least one subset of the distinct data
               abscissae. That is, there is no subset of NCAP7-4 strictly
               increasing values, X(R(1)),X(R(2)),...,X(R(NCAP7-4)), among
               the abscissae such that
                     X(R(1)) < LAMDA(5) < X(R(5)),

                     X(R(2)) < LAMDA(6) < X(R(6)),

                     ...

                     X(R(NCAP7-8)) < LAMDA(NCAP7-4) < X(R(NCAP7-4)).
               This means that there is no unique solution: there are
               regions containing too many knots compared with the number
               of data points.

          7. Accuracy

          The rounding errors committed are such that the computed
          coefficients are exact for a slightly perturbed set of ordinates
          y +(delta)y . The ratio of the root-mean-square value for the
           r         r
          (delta)y  to the root-mean-square value of the y  can be expected
                  r                                       r
          to be less than a small multiple of (kappa)*m*machine precision,
          where (kappa) is a condition number for the problem. Values of
          (kappa) for 20-30 practical data sets all proved to lie between
          4.5 and 7.8 (see Cox [3]). (Note that for these data sets,
          replacing the coincident end knots at the end-points x  and x
                                                                1      m
          used in the routine by various choices of non-coincident exterior
          knots gave values of (kappa) between 16 and 180. Again see Cox
          [3] for further details.) In general we would not expect (kappa)
          to be large unless the choice of knots results in near-violation
          of the Schoenberg-Whitney conditions.

          A cubic spline which adequately fits the data and is free from
          spurious oscillations is more likely to be obtained if the knots
          are chosen to be grouped more closely in regions where the
          function (underlying the data) or its derivatives change more
          rapidly than elsewhere.

          8. Further Comments

                                                               

          The time taken by the routine is approximately C*(2m+n+7)
          seconds, where C is a machine-dependent constant.

          Multiple knots are permitted as long as their multiplicity does
          not exceed 4, i.e., the complete set of knots must satisfy
          (lambda) <(lambda)   , for i=1,2,...,n+3, (cf. Section 6). At a
                  i         i+4
          knot of multiplicity one (the usual case), s(x) and its first two
          derivatives are continuous. At a knot of multiplicity two, s(x)
          and its first derivative are continuous. At a knot of
          multiplicity three, s(x) is continuous, and at a knot of
          multiplicity four, s(x) is generally discontinuous.

          The routine can be used efficiently for cubic spline
          interpolation, i.e., if m=n+3. The abscissae must then of course
          satisfy x <x <...<x . Recommended values for the knots in this
                   1  2      m
          case are (lambda) =x   , for i=5,6,...,n+3.
                           i  i-2
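
          For illustration, the fragment below sketches this interpolation
          usage in Fortran. It is not the distributed example program: the
          value of M, the synthetic data, the unit weights and the output
          statement are assumptions made only for this sketch.

                PROGRAM INTERP
                INTEGER          M, NCAP7, I, IFAIL
                PARAMETER        (M = 14, NCAP7 = M + 4)
                DOUBLE PRECISION X(M), Y(M), W(M), LAMDA(NCAP7), C(NCAP7)
                DOUBLE PRECISION WORK1(M), WORK2(4*NCAP7), SS
          *     Synthetic data, strictly increasing in x, with unit weights.
                DO 10 I = 1, M
                   X(I) = DBLE(I)
                   Y(I) = EXP(-X(I))
                   W(I) = 1.0D0
             10 CONTINUE
          *     Recommended interior knots for interpolation (m = n+3):
          *     LAMDA(i) = X(i-2), for i = 5,6,...,NCAP7-4.
                DO 20 I = 5, NCAP7 - 4
                   LAMDA(I) = X(I-2)
             20 CONTINUE
                IFAIL = 0
                CALL E02BAF(M,NCAP7,X,Y,W,LAMDA,WORK1,WORK2,C,SS,IFAIL)
                WRITE (*,*) 'Residual sum of squares SS =', SS
                END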

          9. Example

          Determine a weighted least-squares cubic spline approximation
          with five intervals (four interior knots) to a set of 14 given
          data points. Tabulate the data and the corresponding values of
          the approximating spline, together with the residual errors, and
          also the values of the approximating spline at points half-way
          between each pair of adjacent data points.

          The example program is written in a general form that will enable
          a cubic spline approximation with n intervals (n-1 interior
          knots) to be obtained to m data points, with arbitrary positive
          weights, and the approximation to be tabulated. Note that E02BBF
          is used to evaluate the approximating spline. The program is
          self-starting in that any number of data sets can be supplied.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02bbf}{NAG On-line Documentation: e02bbf}
\beginscroll
\begin{verbatim}



     E02BBF(3NAG)      Foundation Library (12/10/92)      E02BBF(3NAG)



          E02 -- Curve and Surface Fitting                           E02BBF
                  E02BBF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02BBF evaluates a cubic spline from its B-spline representation.

          2. Specification

                 SUBROUTINE E02BBF (NCAP7, LAMDA, C, X, S, IFAIL)
                 INTEGER          NCAP7, IFAIL
                 DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), X, S

          3. Description

          This routine evaluates the cubic spline s(x) at a prescribed
          argument x from its augmented knot set (lambda) , for
                                                         i
          i=1,2,...,n+7, (see E02BAF) and from the coefficients c , for
                                                                 i
          i=1,2,...,q in its B-spline representation

                                        q
                                        --
                                  s(x)= >  c N (x)
                                        --  i i
                                        i=1

                            

          Here q=n+3, where n is the number of intervals of the spline, and
          N (x) denotes the normalised B-spline of degree 3 defined upon
           i
          the knots (lambda) ,(lambda)   ,...,(lambda)   . The prescribed
                            i         i+1             i+4
          argument x must satisfy (lambda) <=x<=(lambda)   .
                                          4             n+4

                                                                   

          It is assumed that (lambda) >=(lambda)   , for j=2,3,...,n+7, and
                                     j          j-1
          (lambda)   >(lambda) .
                  n+4         4

          The method employed is that of evaluation by taking convex
          combinations due to de Boor [4]. For further details of the
          algorithm and its use see Cox [1] and [3].

          It is expected that a common use of E02BBF will be the evaluation
          of the cubic spline approximations produced by E02BAF. A
          generalization of E02BBF which also forms the derivative of s(x)
          is E02BCF. E02BCF takes about 50% longer than E02BBF.

          4. References

          [1]   Cox M G (1972) The Numerical Evaluation of B-splines. J.
                Inst. Math. Appl. 10 134--149.

          [2]   Cox M G (1978) The Numerical Evaluation of a Spline from its
                B-spline Representation. J. Inst. Math. Appl. 21 135--143.

          [3]   Cox M G and Hayes J G (1973) Curve fitting: a guide and
                suite of algorithms for the non-specialist user. Report
                NAC26. National Physical Laboratory.

          [4]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          5. Parameters

           1:  NCAP7 -- INTEGER                                       Input
                                    

               On entry: n+7, where n is the number of intervals (one
               greater than the number of interior knots, i.e., the knots
               strictly within the range (lambda)  to (lambda)   ) over
                                                  4            n+4
               which the spline is defined. Constraint: NCAP7 >= 8.

           2:  LAMDA(NCAP7) -- DOUBLE PRECISION array                 Input
               On entry: LAMDA(j) must be set to the value of the jth
               member of the complete set of knots, (lambda)  for
                                                            j
               j=1,2,...,n+7. Constraint: the LAMDA(j) must be in non-
               decreasing order with LAMDA(NCAP7-3) > LAMDA(4).

           3:  C(NCAP7) -- DOUBLE PRECISION array                     Input
               On entry: the coefficient c  of the B-spline N (x), for
                                          i                  i
               i=1,2,...,n+3. The remaining elements of the array are not
               used.

           4:  X -- DOUBLE PRECISION                                  Input
               On entry: the argument x at which the cubic spline is to be
               evaluated. Constraint: LAMDA(4) <= X <= LAMDA(NCAP7-3).

           5:  S -- DOUBLE PRECISION                                 Output
               On exit: the value of the spline, s(x).

           6:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
                The argument X does not satisfy
                LAMDA(4) <= X <= LAMDA(NCAP7-3).

               In this case the value of S is set arbitrarily to zero.

          IFAIL= 2
               NCAP7 < 8, i.e., the number of interior knots is negative.

          7. Accuracy

          The computed value of s(x) has negligible error in most practical
          situations. Specifically, this value has an absolute error
          bounded in modulus by 18*c   * machine precision, where c    is
                                    max                            max
          the largest in modulus of c ,c   ,c    and c   , and j is an
                                     j  j+1  j+2      j+3
          integer such that (lambda)   <=x<=(lambda)   . If c ,c   ,c
                                    j+3             j+4      j  j+1  j+2
          and c    are all of the same sign, then the computed value of
               j+3
          s(x) has a relative error not exceeding 20*machine precision in
          modulus. For further details see Cox [2].

          8. Further Comments

                                                                      

          The time taken by the routine is approximately C*(1+0.1*log(n+7))
          seconds, where C is a machine-dependent constant.

          Note: the routine does not test all the conditions on the knots
          given in the description of LAMDA in Section 5, since to do this
          would result in a computation time approximately linear in n+7
          instead of log(n+7). All the conditions are tested in E02BAF,
          however.

          9. Example

          Evaluate at 9 equally-spaced points in the interval 1.0<=x<=9.0
          the cubic spline with (augmented) knots 1.0, 1.0, 1.0, 1.0, 3.0,
          6.0, 8.0, 9.0, 9.0, 9.0, 9.0 and normalised cubic B-spline
          coefficients 1.0, 2.0, 4.0, 7.0, 6.0, 4.0, 3.0.

          The example program is written in a general form that will enable
          a cubic spline with n intervals, in its normalised cubic B-spline
          form, to be evaluated at m equally-spaced points in the interval
          LAMDA(4) <= x <= LAMDA(n+4). The program is self-starting in that
          any number of data sets may be supplied.
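
          A minimal Fortran sketch along these lines, using the knots and
          coefficients quoted above (the program layout, variable names and
          output statement are illustrative assumptions, not the distributed
          example program), might read:

                PROGRAM EVSPLN
                INTEGER          NCAP7, M, I, IFAIL
                PARAMETER        (NCAP7 = 11, M = 9)
                DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), X, S
                DATA LAMDA /1.0D0, 1.0D0, 1.0D0, 1.0D0, 3.0D0, 6.0D0,
               1            8.0D0, 9.0D0, 9.0D0, 9.0D0, 9.0D0/
                DATA C /1.0D0, 2.0D0, 4.0D0, 7.0D0, 6.0D0, 4.0D0, 3.0D0,
               1        4*0.0D0/
          *     Evaluate at M equally-spaced points in the interval
          *     LAMDA(4) <= X <= LAMDA(NCAP7-3), here x = 1.0, 2.0, ..., 9.0.
                DO 10 I = 1, M
                   X = LAMDA(4) + (LAMDA(NCAP7-3)-LAMDA(4))*
               1       DBLE(I-1)/DBLE(M-1)
                   IFAIL = 0
                   CALL E02BBF(NCAP7,LAMDA,C,X,S,IFAIL)
                   WRITE (*,*) X, S
             10 CONTINUE
                END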

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02bcf}{NAG On-line Documentation: e02bcf}
\beginscroll
\begin{verbatim}



     E02BCF(3NAG)      Foundation Library (12/10/92)      E02BCF(3NAG)



          E02 -- Curve and Surface Fitting                           E02BCF
                  E02BCF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02BCF evaluates a cubic spline and its first three derivatives
          from its B-spline representation.

          2. Specification

                 SUBROUTINE E02BCF (NCAP7, LAMDA, C, X, LEFT, S, IFAIL)
                 INTEGER          NCAP7, LEFT, IFAIL
                 DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), X, S(4)

          3. Description

          This routine evaluates the cubic spline s(x) and its first three
          derivatives at a prescribed argument x. It is assumed that s(x)
          is represented in terms of its B-spline coefficients c , for
                                                                i
          i=1,2,...,n+3 and (augmented) ordered knot set (lambda) , for
                                                                 i
          i=1,2,...,n+7, (see E02BAF), i.e.,

                                        q
                                        --
                                  s(x)= >  c N (x)
                                        --  i i
                                        i=1

                      

          Here q=n+3, n is the number of intervals of the spline and N (x)
                                                                      i
          denotes the normalised B-spline of degree 3 (order 4) defined
          upon the knots (lambda) ,(lambda)   ,...,(lambda)   . The
                                 i         i+1             i+4
          prescribed argument x must satisfy

                              (lambda) <=x<=(lambda)
                                      4             n+4

          At a simple knot (lambda)  (i.e., one satisfying
                                   i
          (lambda)   <(lambda) <(lambda)   ), the third derivative of the
                  i-1         i         i+1
          spline is in general discontinuous. At a multiple knot (i.e., two
          or more knots with the same value), lower derivatives, and even
          the spline itself, may be discontinuous. Specifically, at a point
          x=u where (exactly) r knots coincide (such a point is termed a
          knot of multiplicity r), the values of the derivatives of order
          4-j, for j=1,2,...,r, are in general discontinuous. (Here
          1<=r<=4; r>4 is not meaningful.) The user must specify whether the
          value at such a point is required to be the left- or right-hand
          derivative.

          The method employed is based upon:

               (i) carrying out a binary search for the knot interval
               containing the argument x (see Cox [3]),

               (ii) evaluating the non-zero B-splines of orders 1,2,3 and
               4 by recurrence (see Cox [2] and [3]),

               (iii) computing all derivatives of the B-splines of order 4
               by applying a second recurrence to these computed B-spline
               values (see de Boor [1]),

                (iv) multiplying the 4th-order B-spline values and their
                derivatives by the appropriate B-spline coefficients, and
               summing, to yield the values of s(x) and its derivatives.

          E02BCF can be used to compute the values and derivatives of cubic
          spline fits and interpolants produced by E02BAF.

          If only values and not derivatives are required, E02BBF may be
          used instead of E02BCF, which takes about 50% longer than E02BBF.

          4. References

          [1]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          [2]   Cox M G (1972) The Numerical Evaluation of B-splines. J.
                Inst. Math. Appl. 10 134--149.

          [3]   Cox M G (1978) The Numerical Evaluation of a Spline from its
                B-spline Representation. J. Inst. Math. Appl. 21 135--143.

          5. Parameters

           1:  NCAP7 -- INTEGER                                       Input
                                    

               On entry: n+7, where n is the number of intervals of the
               spline (which is one greater than the number of interior
               knots, i.e., the knots strictly within the range (lambda)
                                                                        4
               to (lambda)    over which the spline is defined).
                          n+4
               Constraint: NCAP7 >= 8.

           2:  LAMDA(NCAP7) -- DOUBLE PRECISION array                 Input
               On entry: LAMDA(j) must be set to the value of the jth
               member of the complete set of knots, (lambda) , for
                                                            j
               j=1,2,...,n+7. Constraint: the LAMDA(j) must be in non-
               decreasing order with

               LAMDA(NCAP7-3) > LAMDA(4).

           3:  C(NCAP7) -- DOUBLE PRECISION array                     Input
               On entry: the coefficient c  of the B-spline N (x), for
                                          i                  i
               i=1,2,...,n+3. The remaining elements of the array are not
               used.

           4:  X -- DOUBLE PRECISION                                  Input
               On entry: the argument x at which the cubic spline and its
               derivatives are to be evaluated. Constraint: LAMDA(4) <= X
               <= LAMDA(NCAP7-3).

           5:  LEFT -- INTEGER                                        Input
               On entry: specifies whether left- or right-hand values of
               the spline and its derivatives are to be computed (see
               Section 3). Left- or right-hand values are formed according
               to whether LEFT is equal or not equal to 1. If x does not
                coincide with a knot, the value of LEFT is immaterial. If
                x = LAMDA(4), right-hand values are computed, and if
                x = LAMDA(NCAP7-3), left-hand values are formed, regardless
                of the value of LEFT.

           6:  S(4) -- DOUBLE PRECISION array                        Output
               On exit: S(j) contains the value of the (j-1)th derivative
               of the spline at the argument x, for j = 1,2,3,4. Note that
               S(1) contains the value of the spline.

           7:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               NCAP7 < 8, i.e., the number of intervals is not positive.

          IFAIL= 2
               Either LAMDA(4) >= LAMDA(NCAP7-3), i.e., the range over
               which s(x) is defined is null or negative in length, or X is
               an invalid argument, i.e., X < LAMDA(4) or X >
               LAMDA(NCAP7-3).

          7. Accuracy

          The computed value of s(x) has negligible error in most practical
          situations. Specifically, this value has an absolute error
          bounded in modulus by 18*c   * machine precision, where c    is
                                    max                            max
          the largest in modulus of c ,c   ,c    and c   , and j is an
                                     j  j+1  j+2      j+3
          integer such that (lambda)   <=x<=(lambda)   . If c ,c   ,c
                                    j+3             j+4      j  j+1  j+2
          and c    are all of the same sign, then the computed value of
               j+3
          s(x) has relative error bounded by 18*machine precision. For full
          details see Cox [3].

          No complete error analysis is available for the computation of
          the derivatives of s(x). However, for most practical purposes the
          absolute errors in the computed derivatives should be small.

          8. Further Comments

          The time taken by this routine is approximately linear in
          log(n+7).

          Note: the routine does not test all the conditions on the knots
          given in the description of LAMDA in Section 5, since to do this
          would result in a computation time approximately linear in n+7
          instead of log(n+7). All the conditions are tested in E02BAF,
          however.

          9. Example

          Compute, at the 7 arguments x = 0, 1, 2, 3, 4, 5, 6, the left-
          and right-hand values and first 3 derivatives of the cubic spline
          defined over the interval 0<=x<=6 having the 6 interior knots x =
          1, 3, 3, 3, 4, 4, the 8 additional knots 0, 0, 0, 0, 6, 6, 6, 6,
          and the 10 B-spline coefficients 10, 12, 13, 15, 22, 26, 24, 18,
          14, 12.

          The input data items (using the notation of Section 5) comprise
          the following values in the order indicated:

              

              n           m

              LAMDA(j),   for j= 1,2,...,NCAP7

              C(j),       for j= 1,2,...,NCAP7-4

              x(i),       for i=1,2,...,m

          The example program is written in a general form that will enable
          the values and derivatives of a cubic spline having an arbitrary
          number of knots to be evaluated at a set of arbitrary points. Any
          number of data sets may be supplied. The only changes required to
          the program relate to the dimensions of the arrays LAMDA and C.
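
          A hedged sketch of such a program, specialized to the knot and
          coefficient values given above (the program layout, variable names
          and output statement are assumptions made only for this sketch),
          might read:

                PROGRAM DRVSPL
                INTEGER          NCAP7, I, J, LEFT, IFAIL
                PARAMETER        (NCAP7 = 14)
                DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), X, S(4)
                DATA LAMDA /0.0D0, 0.0D0, 0.0D0, 0.0D0, 1.0D0, 3.0D0,
               1            3.0D0, 3.0D0, 4.0D0, 4.0D0, 6.0D0, 6.0D0,
               2            6.0D0, 6.0D0/
                DATA C /10.0D0, 12.0D0, 13.0D0, 15.0D0, 22.0D0, 26.0D0,
               1        24.0D0, 18.0D0, 14.0D0, 12.0D0, 4*0.0D0/
                DO 20 I = 0, 6
                   X = DBLE(I)
                   DO 10 LEFT = 1, 2
          *           LEFT = 1 requests left-hand values; any other value
          *           requests right-hand values (see Section 5).
                      IFAIL = 0
                      CALL E02BCF(NCAP7,LAMDA,C,X,LEFT,S,IFAIL)
                      WRITE (*,*) X, LEFT, (S(J), J = 1, 4)
             10    CONTINUE
             20 CONTINUE
                END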

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02bdf}{NAG On-line Documentation: e02bdf}
\beginscroll
\begin{verbatim}



     E02BDF(3NAG)      Foundation Library (12/10/92)      E02BDF(3NAG)



          E02 -- Curve and Surface Fitting                           E02BDF
                  E02BDF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02BDF computes the definite integral of a cubic spline from its
          B-spline representation.

          2. Specification

                 SUBROUTINE E02BDF (NCAP7, LAMDA, C, DEFINT, IFAIL)
                 INTEGER          NCAP7, IFAIL
                 DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), DEFINT

          3. Description

          This routine computes the definite integral of the cubic spline
          s(x) between the limits x=a and x=b, where a and b are
          respectively the lower and upper limits of the range over which
          s(x) is defined. It is assumed that s(x) is represented in terms
          of its B-spline coefficients c , for i=1,2,...,n+3 and
                                        i
          (augmented) ordered knot set (lambda) , for i=1,2,...,n+7, with
                                               i
          (lambda) =a, for i = 1,2,3,4 and (lambda) =b, for
                  i                                i
          i=n+4,n+5,n+6,n+7, (see E02BAF), i.e.,

                                        q
                                        --
                                  s(x)= >  c N (x).
                                        --  i i
                                        i=1

                      

          Here q=n+3, n is the number of intervals of the spline and N (x)
                                                                      i
          denotes the normalised B-spline of degree 3 (order 4) defined
          upon the knots (lambda) ,(lambda)   ,...,(lambda)   .
                                 i         i+1             i+4

          The method employed uses the formula given in Section 3 of Cox
          [1].

          E02BDF can be used to determine the definite integrals of cubic
          spline fits and interpolants produced by E02BAF.

          4. References

          [1]   Cox M G (1975) An Algorithm for Spline Interpolation. J.
                Inst. Math. Appl. 15 95--108.

          5. Parameters

           1:  NCAP7 -- INTEGER                                       Input
                                    

               On entry: n+7, where n is the number of intervals of the
               spline (which is one greater than the number of interior
               knots, i.e., the knots strictly within the range a to b)
               over which the spline is defined. Constraint: NCAP7 >= 8.

           2:  LAMDA(NCAP7) -- DOUBLE PRECISION array                 Input
               On entry: LAMDA(j) must be set to the value of the jth
               member of the complete set of knots, (lambda)  for
                                                            j
               j=1,2,...,n+7. Constraint: the LAMDA(j) must be in non-
               decreasing order with LAMDA(NCAP7-3) > LAMDA(4) and satisfy
                           LAMDA(1)=LAMDA(2)=LAMDA(3)=LAMDA(4)
               and

               LAMDA(NCAP7-3)=LAMDA(NCAP7-2)=LAMDA(NCAP7-1)=LAMDA(NCAP7).

           3:  C(NCAP7) -- DOUBLE PRECISION array                     Input
               On entry: the coefficient c  of the B-spline N (x), for
                                          i                  i
               i=1,2,...,n+3. The remaining elements of the array are not
               used.

           4:  DEFINT -- DOUBLE PRECISION                            Output
               On exit: the value of the definite integral of s(x) between
               the limits x=a and x=b, where a=(lambda)  and b=(lambda)   .
                                                       4               n+4

           5:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               NCAP7 < 8, i.e., the number of intervals is not positive.

          IFAIL= 2
               At least one of the following restrictions on the knots is
               violated:
                    LAMDA(NCAP7-3) > LAMDA(4),

                    LAMDA(j) >= LAMDA(j-1),
               for j = 2,3,...,NCAP7, with equality in the cases
               j=2,3,4,NCAP7-2,NCAP7-1, and NCAP7.

          7. Accuracy

          The rounding errors are such that the computed value of the
          integral is exact for a slightly perturbed set of B-spline
          coefficients c  differing in a relative sense from those supplied
                        i      
          by no more than 2.2*(n+3)*machine precision.

          8. Further Comments

          The time taken by the routine is approximately proportional to
          n+7.

          9. Example

          Determine the definite integral over the interval 0<=x<=6 of a
          cubic spline having 6 interior knots at the positions (lambda)=1,
          3, 3, 3, 4, 4, the 8 additional knots 0, 0, 0, 0, 6, 6, 6, 6, and
          the 10 B-spline coefficients 10, 12, 13, 15, 22, 26, 24, 18, 14,
          12.

          The input data items (using the notation of Section 5) comprise
          the following values in the order indicated:

              

              n

               LAMDA(j), for j = 1,2,...,NCAP7

               C(j),     for j = 1,2,...,NCAP7-4

          The example program is written in a general form that will enable
          the definite integral of a cubic spline having an arbitrary
          number of knots to be computed. Any number of data sets may be
          supplied. The only changes required to the program relate to the
          dimensions of the arrays LAMDA and C.
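
          A minimal Fortran sketch for this particular data set (the program
          layout, variable names and output statement are assumptions made
          only for this sketch) might read:

                PROGRAM INTSPL
                INTEGER          NCAP7, IFAIL
                PARAMETER        (NCAP7 = 14)
                DOUBLE PRECISION LAMDA(NCAP7), C(NCAP7), DEFINT
                DATA LAMDA /0.0D0, 0.0D0, 0.0D0, 0.0D0, 1.0D0, 3.0D0,
               1            3.0D0, 3.0D0, 4.0D0, 4.0D0, 6.0D0, 6.0D0,
               2            6.0D0, 6.0D0/
                DATA C /10.0D0, 12.0D0, 13.0D0, 15.0D0, 22.0D0, 26.0D0,
               1        24.0D0, 18.0D0, 14.0D0, 12.0D0, 4*0.0D0/
                IFAIL = 0
                CALL E02BDF(NCAP7,LAMDA,C,DEFINT,IFAIL)
                WRITE (*,*) 'Definite integral over (0,6) =', DEFINT
                END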

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02bef}{NAG On-line Documentation: e02bef}
\beginscroll
\begin{verbatim}



     E02BEF(3NAG)      Foundation Library (12/10/92)      E02BEF(3NAG)



          E02 -- Curve and Surface Fitting                           E02BEF
                  E02BEF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02BEF computes a cubic spline approximation to an arbitrary set
          of data points. The knots of the spline are located
          automatically, but a single parameter must be specified to
          control the trade-off between closeness of fit and smoothness of
          fit.

          2. Specification

                 SUBROUTINE E02BEF (START, M, X, Y, W, S, NEST, N, LAMDA,
                1                   C, FP, WRK, LWRK, IWRK, IFAIL)
                 INTEGER          M, NEST, N, LWRK, IWRK(NEST), IFAIL
                 DOUBLE PRECISION X(M), Y(M), W(M), S, LAMDA(NEST), C(NEST),
                1                 FP, WRK(LWRK)
                 CHARACTER*1      START

          3. Description

          This routine determines a smooth cubic spline approximation s(x)
          to the set of data points (x ,y ), with weights w , for
                                      r  r                 r
          r=1,2,...,m.

          The spline is given in the B-spline representation

                                      n-4
                                      --
                                s(x)= >  c N (x)                        (1)
                                      --  i i
                                      i=1

          where N (x) denotes the normalised cubic B-spline defined upon
                 i
          the knots (lambda) ,(lambda)   ,...,(lambda)   .
                            i         i+1             i+4

          The total number n of these knots and their values
          (lambda) ,...,(lambda)  are chosen automatically by the routine.
                  1             n
          The knots (lambda) ,...,(lambda)    are the interior knots; they
                            5             n-4
          divide the approximation interval [x ,x ] into n-7 sub-intervals.
                                              1  m
          The coefficients c ,c ,...,c    are then determined as the
                            1  2      n-4
          solution of the following constrained minimization problem:

          minimize

                                      n-4
                                      --        2
                               (eta)= >  (delta)                        (2)
                                      --        i
                                      i=5

          subject to the constraint

                                    m
                                    --          2
                           (theta)= >  (epsilon) <=S                    (3)
                                    --          r
                                    r=1

          where: (delta)    stands for the discontinuity jump in the third
                        i   order derivative of s(x) at the interior knot
                      (lambda) ,
                                    i

                 (epsilon)  denotes the weighted residual w (y -s(x )),
                          r                                r  r    r

          and    S          is a non-negative number to be specified by
                            the user.

          The quantity (eta) can be seen as a measure of the (lack of)
          smoothness of s(x), while closeness of fit is measured through
          (theta). By means of the parameter S, 'the smoothing factor', the
          user will then control the balance between these two (usually
          conflicting) properties. If S is too large, the spline will be
          too smooth and signal will be lost (underfit); if S is too small,
          the spline will pick up too much noise (overfit). In the extreme
          cases the routine will return an interpolating spline ((theta)=0)
          if S is set to zero, and the weighted least-squares cubic
          polynomial ((eta)=0) if S is set very large. Experimenting with S
          values between these two extremes should result in a good
          compromise. (See Section 8.2 for advice on choice of S.)

          The method employed is outlined in Section 8.3 and fully
          described in Dierckx [1], [2] and [3]. It involves an adaptive
          strategy for locating the knots of the cubic spline (depending on
          the function underlying the data and on the value of S), and an
          iterative method for solving the constrained minimization problem
          once the knots have been determined.

          Values of the computed spline, or of its derivatives or definite
          integral, can subsequently be computed by calling E02BBF, E02BCF
          or E02BDF, as described in Section 8.4.

          4. References

          [1]   Dierckx P (1975) An Algorithm for Smoothing, Differentiation
                and Integration of Experimental Data Using Spline Functions.
                J. Comput. Appl. Math. 1 165--184.

          [2]   Dierckx P (1982) A Fast Algorithm for Smoothing Data on a
                Rectangular Grid while using Spline Functions. SIAM J.
                Numer. Anal. 19 1286--1304.

          [3]   Dierckx P (1981) An Improved Algorithm for Curve Fitting
                with Spline Functions. Report TW54. Department of Computer
                Science, Katholieke Universiteit Leuven.

          [4]   Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
                10 177--183.

          5. Parameters

           1:  START -- CHARACTER*1                                   Input
               On entry: START must be set to 'C' or 'W'.

               If START = 'C' (Cold start), the routine will build up the
               knot set starting with no interior knots. No values need be
               assigned to the parameters N, LAMDA, WRK or IWRK.

               If START = 'W' (Warm start), the routine will restart the
               knot-placing strategy using the knots found in a previous
               call of the routine. In this case, the parameters N, LAMDA,
               WRK, and IWRK must be unchanged from that previous call.
               This warm start can save much time in searching for a
               satisfactory value of S. Constraint: START = 'C' or 'W'.

           2:  M -- INTEGER                                           Input
               On entry: m, the number of data points. Constraint: M >= 4.

           3:  X(M) -- DOUBLE PRECISION array                         Input
               On entry: the values x  of the independent variable
                                     r
               (abscissa) x, for r=1,2,...,m. Constraint: x <x <...<x
                                                           1  2      m

           4:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: the values y  of the dependent variable
                                     r
               (ordinate) y, for r=1,2,...,m.

           5:  W(M) -- DOUBLE PRECISION array                         Input
               On entry: the values w  of the weights, for r=1,2,...,m.
                                     r
               For advice on the choice of weights, see the Chapter
               Introduction, Section 2.1.2. Constraint: W(r) > 0, for
               r=1,2,...,m.

           6:  S -- DOUBLE PRECISION                                  Input
               On entry: the smoothing factor, S.

               If S=0.0, the routine returns an interpolating spline.

               If S is smaller than machine precision, it is assumed equal
               to zero.

                For advice on the choice of S, see Section 3 and Section
                8.2. Constraint: S >= 0.0.

           7:  NEST -- INTEGER                                        Input
               On entry: an over-estimate for the number, n, of knots
               required. Constraint: NEST >= 8. In most practical
               situations, NEST = M/2 is sufficient. NEST never needs to be
               larger than M + 4, the number of knots needed for
               interpolation (S = 0.0).

           8:  N -- INTEGER                                    Input/Output
               On entry: if the warm start option is used, the value of N
               must be left unchanged from the previous call. On exit: the
               total number, n, of knots of the computed spline.

           9:  LAMDA(NEST) -- DOUBLE PRECISION array           Input/Output
               On entry: if the warm start option is used, the values
               LAMDA(1), LAMDA(2),...,LAMDA(N) must be left unchanged from
               the previous call. On exit: the knots of the spline i.e.,
               the positions of the interior knots LAMDA(5), LAMDA(6),...
               ,LAMDA(N-4) as well as the positions of the additional knots
               LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA(4) = x  and
                                                            1

               LAMDA(N-3) = LAMDA(N-2) = LAMDA(N-1) = LAMDA(N) = x  needed
                                                                  m
               for the B-spline representation.

          10:  C(NEST) -- DOUBLE PRECISION array                     Output
               On exit: the coefficient c  of the B-spline N (x) in the
                                          i                  i
               spline approximation s(x), for i=1,2,...,n-4.

          11:  FP -- DOUBLE PRECISION                                Output
               On exit: the sum of the squared weighted residuals, (theta),
               of the computed spline approximation. If FP = 0.0, this is
               an interpolating spline. FP should equal S within a relative
               tolerance of 0.001 unless n=8 when the spline has no
               interior knots and so is simply a cubic polynomial. For
               knots to be inserted, S must be set to a value below the
               value of FP produced in this case.

          12:  WRK(LWRK) -- DOUBLE PRECISION array                Workspace
                On entry: if the warm start option is used, the values
                WRK(1),...,WRK(n) must be left unchanged from the previous
                call.

          13:  LWRK -- INTEGER                                        Input
               On entry:
               the dimension of the array WRK as declared in the
               (sub)program from which E02BEF is called.
               Constraint: LWRK>=4*M+16*NEST+41.

          14:  IWRK(NEST) -- INTEGER array                        Workspace
                On entry: if the warm start option is used, the values
                IWRK(1), ..., IWRK(n) must be left unchanged from the
                previous call.

               This array is used as workspace.

          15:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry START /= 'C' or 'W',

               or       M < 4,

               or       S < 0.0,

               or       S = 0.0 and NEST < M + 4,

               or       NEST < 8,

               or       LWRK<4*M+16*NEST+41.

          IFAIL= 2
               The weights are not all strictly positive.

          IFAIL= 3
               The values of X(r), for r=1,2,...,M, are not in strictly
               increasing order.

          IFAIL= 4
               The number of knots required is greater than NEST. Try
               increasing NEST and, if necessary, supplying larger arrays
               for the parameters LAMDA, C, WRK and IWRK. However, if NEST
               is already large, say NEST > M/2, then this error exit may
               indicate that S is too small.

          IFAIL= 5
               The iterative process used to compute the coefficients of
               the approximating spline has failed to converge. This error
               exit may occur if S has been set very small. If the error
               persists with increased S, consult NAG.

          If IFAIL = 4 or 5, a spline approximation is returned, but it
          fails to satisfy the fitting criterion (see (2) and (3) in
          Section 3) - perhaps by only a small amount, however.

          7. Accuracy

          On successful exit, the approximation returned is such that its
          weighted sum of squared residuals FP is equal to the smoothing
          factor S, up to a specified relative tolerance of 0.001 - except
          that if n=8, FP may be significantly less than S: in this case
          the computed spline is simply a weighted least-squares polynomial
          approximation of degree 3, i.e., a spline with no interior knots.

          8. Further Comments

          8.1. Timing

          The time taken for a call of E02BEF depends on the complexity of
          the shape of the data, the value of the smoothing factor S, and
          the number of data points. If E02BEF is to be called for
          different values of S, much time can be saved by setting START =
          'W' after the first call.

          8.2. Choice of S

          If the weights have been correctly chosen (see Section 2.1.2 of
          the Chapter Introduction), the standard deviation of w y  would
                                                                r r
          be the same for all r, equal to (sigma), say. In this case,
                                                              2      
          choosing the smoothing factor S in the range (sigma) (m +- sqrt(2m)),
          as suggested by Reinsch [4], is likely to give a good start in
          the search for a satisfactory value. Otherwise, experimenting
          with different values of S will be required from the start,
          taking account of the remarks in Section 3.

          In that case, in view of computation time and memory
          requirements, it is recommended to start with a very large value
          for S and so determine the least-squares cubic polynomial; the
          value returned for FP, call it FP , gives an upper bound for S.
                                           0
          Then progressively decrease the value of S to obtain closer fits
          - say by a factor of 10 in the beginning, i.e., S=FP /10,
                                                              0
          S=FP /100, and so on, and more carefully as the approximation
              0
          shows more details.
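
          A hedged sketch of this search (the array sizes, the number of
          trial values of S, the input of the data and the absence of an
          acceptance test are all assumptions made only for this fragment)
          is:

          *     Cold start with a very large S to obtain the least-squares
          *     cubic polynomial and its residual FP; then warm starts with
          *     S reduced by a factor of 10 at each trial.
                INTEGER          MMAX, NEST, LWRK
                PARAMETER        (MMAX = 200, NEST = MMAX + 4,
               1                  LWRK = 4*MMAX + 16*NEST + 41)
                INTEGER          M, N, IWRK(NEST), IFAIL, K
                DOUBLE PRECISION X(MMAX), Y(MMAX), W(MMAX), S, FP
                DOUBLE PRECISION LAMDA(NEST), C(NEST), WRK(LWRK)
                CHARACTER*1      START
          *     ... read M (M <= MMAX), X, Y and W ...
                START = 'C'
                S = 1.0D10
                DO 10 K = 1, 6
                   IFAIL = 0
                   CALL E02BEF(START,M,X,Y,W,S,NEST,N,LAMDA,C,FP,
               1               WRK,LWRK,IWRK,IFAIL)
          *        N, LAMDA, WRK and IWRK are left unchanged between calls,
          *        as required for the warm start.
                   START = 'W'
                   S = FP/10.0D0
             10 CONTINUE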

          The number of knots of the spline returned, and their location,
          generally depend on the value of S and on the behaviour of the
          function underlying the data. However, if E02BEF is called with
          START = 'W', the knots returned may also depend on the smoothing
          factors of the previous calls. Therefore if, after a number of
          trials with different values of S and START = 'W', a fit can
          finally be accepted as satisfactory, it may be worthwhile to call
          E02BEF once more with the selected value for S but now using
          START = 'C'. Often, E02BEF then returns an approximation with the
          same quality of fit but with fewer knots, which is therefore
          better if data reduction is also important.

          8.3. Outline of Method Used

          If S=0, the requisite number of knots is known in advance, i.e.,
          n=m+4; the interior knots are located immediately as (lambda)  =
                                                                       i
          x   , for i=5,6,...,n-4. The corresponding least-squares spline
           i-2
          (see E02BAF) is then an interpolating spline and therefore a
          solution of the problem.

          If S>0, a suitable knot set is built up in stages (starting with
          no interior knots in the case of a cold start but with the knot
          set found in a previous call if a warm start is chosen). At each
          stage, a spline is fitted to the data by least-squares (see
          E02BAF) and (theta), the weighted sum of squares of residuals, is
          computed. If (theta)>S, new knots are added to the knot set to
          reduce (theta) at the next stage. The new knots are located in
          intervals where the fit is particularly poor, their number
          depending on the value of S and on the progress made so far in
          reducing (theta). Sooner or later, we find that (theta)<=S and at
          that point the knot set is accepted. The routine then goes on to
          compute the (unique) spline which has this knot set and which
          satisfies the full fitting criterion specified by (2) and (3).
          The theoretical solution has (theta)=S. The routine computes the
          spline by an iterative scheme which is ended when (theta)=S
          within a relative tolerance of 0.001. The main part of each
          iteration consists of a linear least-squares computation of
          special form, done in a similarly stable and efficient manner as
          in E02BAF.

          An exception occurs when the routine finds at the start that,
          even with no interior knots (n=8), the least-squares spline
          already has its weighted sum of squares of residuals <=S. In this
          case, since this spline (which is simply a cubic polynomial) also
          has an optimal value for the smoothness measure (eta), namely
          zero, it is returned at once as the (trivial) solution. It will
          usually mean that S has been chosen too large.

          For further details of the algorithm and its use, see Dierckx [3].

          8.4. Evaluation of Computed Spline

          The value of the computed spline at a given value X may be
          obtained in the double precision variable S by the call:


                 CALL E02BBF(N,LAMDA,C,X,S,IFAIL)

          where N, LAMDA and C are the output parameters of E02BEF.

          The values of the spline and its first three derivatives at a
          given value X may be obtained in the double precision array SDIF
          of dimension at least 4 by the call:

                 CALL E02BCF(N,LAMDA,C,X,LEFT,SDIF,IFAIL)

          where if LEFT = 1, left-hand derivatives are computed and if LEFT
          /= 1, right-hand derivatives are calculated. The value of LEFT is
          only relevant if X is an interior knot.

          The value of the definite integral of the spline over the
          interval X(1) to X(M) can be obtained in the double precision
          variable SINT by the call:

                 CALL E02BDF(N,LAMDA,C,SINT,IFAIL)
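
          For completeness, a sketch showing how these three calls might sit
          together in a calling (sub)program is given below; the value of
          NEST and the choice of the evaluation point X are assumptions made
          only for this fragment.

                INTEGER          NEST, N, LEFT, IFAIL
                PARAMETER        (NEST = 100)
                DOUBLE PRECISION LAMDA(NEST), C(NEST), X, S, SDIF(4), SINT
          *     ... N, LAMDA and C as returned by a successful call of
          *     E02BEF ...
          *     Evaluate at the mid-point of the approximation interval.
                X = 0.5D0*(LAMDA(4)+LAMDA(N-3))
                IFAIL = 0
                CALL E02BBF(N,LAMDA,C,X,S,IFAIL)
                LEFT = 1
                IFAIL = 0
                CALL E02BCF(N,LAMDA,C,X,LEFT,SDIF,IFAIL)
                IFAIL = 0
                CALL E02BDF(N,LAMDA,C,SINT,IFAIL)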

          9. Example

          This example program reads in a set of data values, followed by a
          set of values of S. For each value of S it calls E02BEF to
          compute a spline approximation, and prints the values of the
          knots and the B-spline coefficients c .
                                               i

          The program includes code to evaluate the computed splines, by
          calls to E02BBF, at the points x  and at points mid-way between
                                          r
          them. These values are not printed out, however; instead the
          results are illustrated by plots of the computed splines,
          together with the data points (indicated by *) and the positions
          of the knots (indicated by vertical lines): the effect of
          decreasing S can be clearly seen. (The plots were obtained by
          calling NAG Graphical Supplement routine J06FAF(*).)


                   Please see figures in printed Reference Manual

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02daf}{NAG On-line Documentation: e02daf}
\beginscroll
\begin{verbatim}



     E02DAF(3NAG)      Foundation Library (12/10/92)      E02DAF(3NAG)



          E02 -- Curve and Surface Fitting                           E02DAF
                  E02DAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02DAF forms a minimal, weighted least-squares bicubic spline
          surface fit with prescribed knots to a given set of data points.

          2. Specification

                 SUBROUTINE E02DAF (M, PX, PY, X, Y, F, W, LAMDA, MU,
                1                   POINT, NPOINT, DL, C, NC, WS, NWS, EPS,
                2                   SIGMA, RANK, IFAIL)
                 INTEGER          M, PX, PY, POINT(NPOINT), NPOINT, NC, NWS,
                1                 RANK, IFAIL
                 DOUBLE PRECISION X(M), Y(M), F(M), W(M), LAMDA(PX), MU(PY),
                1                 DL(NC), C(NC), WS(NWS), EPS, SIGMA

          3. Description

          This routine determines a bicubic spline fit s(x,y) to the set of
          data points (x ,y ,f ) with weights w , for r=1,2,...,m. The two
                        r  r  r                r
          sets of internal knots of the spline, {(lambda)} and {(mu)},
          associated with the variables x and y respectively, are
          prescribed by the user. These knots can be thought of as dividing
          the data region of the (x,y) plane into panels (see diagram in
          Section 5). A bicubic spline consists of a separate bicubic
          polynomial in each panel, the polynomials joining together with
          continuity up to the second derivative across the panel
          boundaries.

          s(x,y) has the property that (Sigma), the sum of squares of its
          weighted residuals (rho) , for r=1,2,...,m, where
                                  r

                            (rho) =w (s(x ,y )-f ),                     (1)
                                 r  r    r  r   r

          is as small as possible for a bicubic spline with the given knot
          sets. The routine produces this minimized value of (Sigma) and
          the coefficients c   in the B-spline representation of s(x,y) -
                            ij
          see Section 8. E02DEF and E02DFF are available to compute values
          of the fitted spline from the coefficients c  .
                                                      ij

          The least-squares criterion is not always sufficient to determine
          the bicubic spline uniquely: there may be a whole family of
          splines which have the same minimum sum of squares. In these
          cases, the routine selects from this family the spline for which
          the sum of squares of the coefficients c   is smallest: in other
                                                  ij
          words, the minimal least-squares solution. This choice, although
          arbitrary, reduces the risk of unwanted fluctuations in the
          spline fit. The method employed involves forming a system of m
          linear equations in the coefficients c   and then computing its
                                                ij
          least-squares solution, which will be the minimal least-squares
          solution when appropriate. The basis of the method is described
          in Hayes and Halliday [4]. The matrix of the equation is formed
          using a recurrence relation for B-splines which is numerically
          stable (see Cox [1] and de Boor [2] - the former contains the
          more elementary derivation but, unlike [2], does not cover the
          case of coincident knots). The least-squares solution is also
          obtained in a stable manner by using orthogonal transformations,
          viz. a variant of Givens rotation (see Gentleman [3]). This
          requires only one row of the matrix to be stored at a time.
          Advantage is taken of the stepped-band structure which the matrix
          possesses when the data points are suitably ordered, there being
          at most sixteen non-zero elements in any row because of the
          definition of B-splines. First the matrix is reduced to upper
          triangular form and then the diagonal elements of this triangle
          are examined in turn. When an element is encountered whose
          square, divided by the mean squared weight, is less than a
          threshold (epsilon), it is replaced by zero and the rest of the
          elements in its row are reduced to zero by rotations with the
          remaining rows. The rank of the system is taken to be the number
          of non-zero diagonal elements in the final triangle, and the non-
          zero rows of this triangle are used to compute the minimal least-
          squares solution. If all the diagonal elements are non-zero, the
          rank is equal to the number of coefficients c   and the solution
                                                       ij
          obtained is the ordinary least-squares solution, which is unique
          in this case.
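
          After the call, the output parameter RANK (see Section 5) shows
          which of these two cases has occurred. A minimal sketch of such
          a check, assuming RANK and NC are as used in the call, is:

          * Hedged sketch: distinguish the unique least-squares
          * solution (RANK = NC) from the minimal least-squares
          * solution described above.
                IF (RANK.LT.NC) THEN
                   WRITE (*,*) 'Minimal solution returned, RANK =', RANK
                END IF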

          4. References

          [1]   Cox M G (1972) The Numerical Evaluation of B-splines. J.
                Inst. Math. Appl. 10 134--149.

          [2]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          [3]   Gentleman W M (1973) Least-squares Computations by Givens
                Transformations without Square Roots. J. Inst. Math. Applic.
                12 329--336.

          [4]   Hayes J G and Halliday J (1974) The Least-squares Fitting of
                Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
                Appl. 14 89--103.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: the number of data points, m. Constraint: M > 1.

           2:  PX -- INTEGER                                          Input

           3:  PY -- INTEGER                                          Input
               On entry: the total number of knots (lambda) and (mu)
               associated with the variables x and y, respectively.
               Constraint: PX >= 8 and PY >= 8.

               (They are such that PX-8 and PY-8 are the corresponding
               numbers of interior knots.) The running time and storage
               required by the routine are both minimized if the axes are
               labelled so that PY is the smaller of PX and PY.

           4:  X(M) -- DOUBLE PRECISION array                         Input

           5:  Y(M) -- DOUBLE PRECISION array                         Input

           6:  F(M) -- DOUBLE PRECISION array                         Input
               On entry: the co-ordinates of the data point (x ,y ,f ), for
                                                              r  r  r
               r=1,2,...,m. The order of the data points is immaterial, but
               see the array POINT, below.

           7:  W(M) -- DOUBLE PRECISION array                         Input
               On entry: the weight w  of the rth data point. It is
                                     r
               important to note the definition of weight implied by the
               equation (1) in Section 3, since it is also common usage to
               define weight as the square of this weight. In this routine,
               each w  should be chosen inversely proportional to the
                     r
               (absolute) accuracy of the corresponding f , as expressed,
                                                         r
               for example, by the standard deviation or probable error of
               the f . When the f  are all of the same accuracy, all the w
                    r            r                                        r
               may be set equal to 1.0.

           8:  LAMDA(PX) -- DOUBLE PRECISION array             Input/Output
               On entry: LAMDA(i+4) must contain the ith interior knot
               (lambda)     associated with the variable x, for
                       i+4
               i=1,2,...,PX-8. The knots must be in non-decreasing order
               and lie strictly within the range covered by the data values
               of x. A knot is a value of x at which the spline is allowed
               to be discontinuous in the third derivative with respect to
               x, though continuous up to the second derivative. This
               degree of continuity can be reduced, if the user requires,
               by the use of coincident knots, provided that no more than
               four knots are chosen to coincide at any point. Two, or
               three, coincident knots allow loss of continuity in,
               respectively, the second and first derivative with respect
               to x at the value of x at which they coincide. Four
               coincident knots split the spline surface into two
               independent parts. For choice of knots see Section 8. On
               exit: the interior knots LAMDA(5) to LAMDA(PX-4) are
               unchanged, and the segments LAMDA(1:4) and LAMDA(PX-3:PX)
               contain additional (exterior) knots introduced by the
               routine in order to define the full set of B-splines
               required. The four knots in the first segment are all set
               equal to the lowest data value of x and the other four
               additional knots are all set equal to the highest value:
               there is experimental evidence that coincident end-knots are
               best for numerical accuracy. The complete array must be left
               undisturbed if E02DEF or E02DFF is to be used subsequently.

           9:  MU(PY) -- DOUBLE PRECISION array                       Input
               On entry: MU(i+4) must contain the ith interior knot (mu)
                                                                        i+4
               associated with the variable y, i=1,2,...,PY-8. The same
               remarks apply to MU as to LAMDA above, with Y replacing X,
               and y replacing x.

          10:  POINT(NPOINT) -- INTEGER array                         Input
               On entry: indexing information usually provided by E02ZAF
               which enables the data points to be accessed in the order
               which produces the advantageous matrix structure mentioned
               in Section 3. This order is such that, if the (x,y) plane is
               thought of as being divided into rectangular panels by the
               two sets of knots, all data in a panel occur before data in
               succeeding panels, where the panels are numbered from bottom
               to top and then left to right with the usual arrangement of
               axes, as indicated in the diagram.

                      Please see figure in printed Reference Manual

               A data point lying exactly on one or more panel sides is
               considered to be in the highest numbered panel adjacent to
               the point. E02ZAF should be called to obtain the array
               POINT, unless it is provided by other means.

          11:  NPOINT -- INTEGER                                      Input
               On entry:
               the dimension of the array POINT as declared in the
               (sub)program from which E02DAF is called.
               Constraint: NPOINT >= M + (PX-7)*(PY-7).

          12:  DL(NC) -- DOUBLE PRECISION array                      Output
               On exit: DL gives the squares of the diagonal elements of
               the reduced triangular matrix, divided by the mean squared
               weight. It includes those elements, less than (epsilon),
               which are treated as zero (see Section 3).

          13:  C(NC) -- DOUBLE PRECISION array                       Output
               On exit: C gives the coefficients of the fit. C((PY-4)*(i-
               1)+j) is the coefficient c   of Section 3 and Section 8 for
                                         ij
               i=1,2,...,PX-4 and j=1,2,...,PY-4. These coefficients are
               used by E02DEF or E02DFF to calculate values of the fitted
               function.

          14:  NC -- INTEGER                                          Input
               On entry: the value (PX-4)*(PY-4).

          15:  WS(NWS) -- DOUBLE PRECISION array                  Workspace

          16:  NWS -- INTEGER                                         Input
               On entry:
               the dimension of the array WS as declared in the
               (sub)program from which E02DAF is called.
               Constraint: NWS>=(2*NC+1)*(3*PY-6)-2.

          17:  EPS -- DOUBLE PRECISION                                Input
               On entry: a threshold (epsilon) for determining the
               effective rank of the system of linear equations. The rank
               is determined as the number of elements of the array DL (see
               below) which are non-zero. An element of DL is regarded as
               zero if it is less than (epsilon). Machine precision is a
               suitable value for (epsilon) in most practical applications
               where the data are accurate to only 2 or 3 decimals. If some
               coefficients of the fit prove to be very large compared with
               the data ordinates, this suggests that (epsilon) should be
               increased so as to decrease the rank. The array DL will give
               a guide to appropriate values of (epsilon) to achieve this,
               as well as to the choice of (epsilon) in other cases where
               some experimentation may be needed to determine a value
               which leads to a satisfactory fit.

          18:  SIGMA -- DOUBLE PRECISION                             Output
               On exit: (Sigma), the weighted sum of squares of residuals.
               This is not computed from the individual residuals but from
               the right-hand sides of the orthogonally-transformed linear
               equations. For further details see Hayes and Halliday [4]
               page 97. The two methods of computation are theoretically
               equivalent, but the results may differ because of rounding
               error.

          19:  RANK -- INTEGER                                       Output
               On exit: the rank of the system as determined by the value
               of the threshold (epsilon). When RANK = NC, the least-
               squares solution is unique: in other cases the minimal
               least-squares solution is computed.

          20:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               At least one set of knots is not in non-decreasing order, or
               an interior knot is outside the range of the data values.

          IFAIL= 2
               More than four knots coincide at a single point, possibly
               because all data points have the same value of x (or y) or
               because an interior knot coincides with an extreme data
               value.

          IFAIL= 3
               Array POINT does not indicate the data points in panel
               order. Call E02ZAF to obtain a correct array.

          IFAIL= 4
               On entry M <= 1,

               or       PX < 8,

               or       PY < 8,

               or       NC /= (PX-4)*(PY-4),

               or       NWS is too small,

               or       NPOINT is too small.

          IFAIL= 5
               All the weights w  are zero or rank determined as zero.
                                r

          7. Accuracy

          The computation of the B-splines and reduction of the observation
          matrix to triangular form are both numerically stable.

          8. Further Comments

          The time taken by this routine is approximately proportional to
                                                           2
          the number of data points, m, and to (3*(PY-4)+4) .

          The B-spline representation of the bicubic spline is

                                       --
                               s(x,y)= > c  M (x)N (y)
                                       -- ij i    j
                                       ij

          summed over i=1,2,...,PX-4 and over j=1,2,...,PY-4. Here M (x)
                                                                    i
          and N (y) denote normalised cubic B-splines, the former defined on
               j
          the knots (lambda) ,(lambda)   ,...,(lambda)    and the latter on
                            i         i+1             i+4
          the knots (mu) ,(mu)   ,...,(mu)   . For further details, see
                        j     j+1         j+4
          Hayes and Halliday [4] for bicubic splines and de Boor [2] for
          normalised B-splines.
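
          These coefficients are returned by E02DAF in the array C, using
          the storage scheme described under the parameter C in Section 5.
          As a minimal sketch (with I and J assumed to be valid indices),
          a single coefficient can be picked out as follows:

          * Hedged sketch: the coefficient c(i,j) of the representation
          * above, stored as C((PY-4)*(i-1)+j) for i=1,2,...,PX-4 and
          * j=1,2,...,PY-4.
                CIJ = C((PY-4)*(I-1)+J)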

          The choice of the interior knots, which help to determine the
          spline's shape, must largely be a matter of trial and error. It
          is usually best to start with a small number of knots and,
          examining the fit at each stage, add a few knots at a time at
          places where the fit is particularly poor. In intervals of x or y
          where the surface represented by the data changes rapidly, in
          function value or derivatives, more knots will be needed than
          elsewhere. In some cases guidance can be obtained by analogy with
          the case of coincident knots: for example, just as three
          coincident knots can produce a discontinuity in slope, three
          close knots can produce rapid change in slope. Of course, such
          rapid changes in behaviour must be adequately represented by the
          data points, as indeed must the behaviour of the surface
          generally, if a satisfactory fit is to be achieved. When there is
          no rapid change in behaviour, equally-spaced knots will often
          suffice.

          In all cases the fit should be examined graphically before it is
          accepted as satisfactory.

          The fit obtained is not defined outside the rectangle

                  (lambda) <=x<=(lambda)    ,   (mu) <=y<=(mu)
                          4             PX-3        4         PY-3

          The reason for taking the extreme data values of x and y for
          these four knots is that, as is usual in data fitting, the fit
          cannot be expected to give satisfactory values outside the data
          region. If, nevertheless, the user requires values over a larger
          rectangle, this can be achieved by augmenting the data with two
          artificial data points (a,c,0) and (b,d,0) with zero weight,
          where a<=x<=b, c<=y<=d defines the enlarged rectangle. In the
          case when the data are adequate to make the least-squares
          solution unique (RANK = NC), this enlargement will not affect the
          fit over the original rectangle, except for possibly enlarged
          rounding errors, and will simply continue the bicubic polynomials
          in the panels bordering the rectangle out to the new boundaries:
          in other cases the fit will be affected. Even using the original
          rectangle there may be regions within it, particularly at its
          corners, which lie outside the data region and where, therefore,
          the fit will be unreliable. For example, if there is no data
          point in panel 1 of the diagram in Section 5, the least-squares
          criterion leaves the spline indeterminate in this panel: the
          minimal spline determined by the subroutine in this case passes
          through the value zero at the point ((lambda) ,(mu) ).
                                                       4     4
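
          A minimal sketch of this device is given below; it assumes that
          the data arrays have been dimensioned to hold the two extra
          points and that A, B, C1 and D define the enlarged rectangle
          (C1 is used here simply to avoid a clash with the coefficient
          array C):

          * Hedged sketch: extend the region over which the fit is
          * defined to A <= x <= B, C1 <= y <= D by appending two
          * artificial data points with zero weight.
                X(M+1) = A
                Y(M+1) = C1
                F(M+1) = 0.0D0
                W(M+1) = 0.0D0
                X(M+2) = B
                Y(M+2) = D
                F(M+2) = 0.0D0
                W(M+2) = 0.0D0
                M = M + 2
          * E02ZAF and E02DAF are then called with the enlarged data
          * set.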

          9. Example

          This example program reads a value for (epsilon), and a set of
          data points, weights and knot positions. If there are more y
          knots than x knots, it interchanges the x and y axes. It calls
          E02ZAF to sort the data points into panel order, E02DAF to fit a
          bicubic spline to them, and E02DEF to evaluate the spline at the
          data points.

          Finally it prints:

               the weighted sum of squares of residuals computed from the
               linear equations;

               the rank determined by E02DAF;

               data points, fitted values and residuals in panel order;

               the weighted sum of squares of the residuals;

               the coefficients of the spline fit.

          The program is written to handle any number of data sets.

          Note: the data supplied in this example is not typical of a
          realistic problem: the number of data points would normally be
          much larger (in which case the array dimensions and the value of
          NWS in the program would have to be increased); and the value of
          (epsilon) would normally be much smaller on most machines (see
                                                     -6
          Section 5; the relatively large value of 10   has been chosen in
          order to illustrate a minimal least-squares solution when RANK <
          NC; in this example NC = 24).

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
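
          A minimal sketch of that call sequence is given here for
          orientation. The array dimensions are illustrative assumptions
          only, and the argument lists (in particular that of E02ZAF,
          with its ADRES workspace) should be checked against the
          corresponding routine documents for your implementation.

          * Hedged sketch of the sequence described in Section 9: sort
          * the data into panel order (E02ZAF), fit the bicubic spline
          * (E02DAF), then evaluate it at the data points (E02DEF).
                INTEGER          MMAX, PXMAX, PYMAX, NCMAX, NADMAX
                INTEGER          NPMAX, NWSMAX
                PARAMETER        (MMAX=400, PXMAX=12, PYMAX=10)
                PARAMETER        (NCMAX=(PXMAX-4)*(PYMAX-4))
                PARAMETER        (NADMAX=(PXMAX-7)*(PYMAX-7))
                PARAMETER        (NPMAX=MMAX+NADMAX)
                PARAMETER        (NWSMAX=(2*NCMAX+1)*(3*PYMAX-6)-2)
                INTEGER          M, PX, PY, NC, NADRES, NPOINT, NWS
                INTEGER          RANK, IFAIL
                INTEGER          POINT(NPMAX), ADRES(NADMAX)
                INTEGER          IWRK(PYMAX-4)
                DOUBLE PRECISION X(MMAX), Y(MMAX), F(MMAX), W(MMAX)
                DOUBLE PRECISION FF(MMAX), LAMDA(PXMAX), MU(PYMAX)
                DOUBLE PRECISION DL(NCMAX), C(NCMAX), WS(NWSMAX)
                DOUBLE PRECISION WRK(PYMAX-4), EPS, SIGMA
          * ... read M, PX, PY, the data points, the weights and the
          * interior knots, with M <= MMAX, PX <= PXMAX, PY <= PYMAX ...
                NC = (PX-4)*(PY-4)
                NADRES = (PX-7)*(PY-7)
                NPOINT = M + NADRES
                NWS = (2*NC+1)*(3*PY-6) - 2
          * EPS is illustrative here; see the parameter EPS in Section 5.
                EPS = 1.0D-6
                IFAIL = 0
                CALL E02ZAF(PX,PY,LAMDA,MU,M,X,Y,POINT,NPOINT,ADRES,
               *            NADRES,IFAIL)
                IFAIL = 0
                CALL E02DAF(M,PX,PY,X,Y,F,W,LAMDA,MU,POINT,NPOINT,DL,
               *            C,NC,WS,NWS,EPS,SIGMA,RANK,IFAIL)
                IFAIL = 0
                CALL E02DEF(M,PX,PY,X,Y,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)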

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02dcf}{NAG On-line Documentation: e02dcf}
\beginscroll
\begin{verbatim}



     E02DCF(3NAG)      Foundation Library (12/10/92)      E02DCF(3NAG)



          E02 -- Curve and Surface Fitting                           E02DCF
                  E02DCF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02DCF computes a bicubic spline approximation to a set of data
          values, given on a rectangular grid in the x-y plane. The knots
          of the spline are located automatically, but a single parameter
          must be specified to control the trade-off between closeness of
          fit and smoothness of fit.

          2. Specification

                 SUBROUTINE E02DCF (START, MX, X, MY, Y, F, S, NXEST,
                1                   NYEST, NX, LAMDA, NY, MU, C, FP, WRK,
                2                   LWRK, IWRK, LIWRK, IFAIL)
                 INTEGER          MX, MY, NXEST, NYEST, NX, NY, LWRK, IWRK
                1                 (LIWRK), LIWRK, IFAIL
                 DOUBLE PRECISION X(MX), Y(MY), F(MX*MY), S, LAMDA(NXEST),
                1                 MU(NYEST), C((NXEST-4)*(NYEST-4)), FP, WRK
                2                 (LWRK)
                 CHARACTER*1      START

          3. Description

          This routine determines a smooth bicubic spline approximation
          s(x,y) to the set of data points (x ,y ,f   ), for q=1,2,...,m
                                             q  r  q,r                  x
          and r=1,2,...,m .
                         y

          The spline is given in the B-spline representation

                                n -4 n -4
                                 x    y
                                --   --
                        s(x,y)= >    >   c  M (x)N (y),                 (1)
                                --   --   ij i    j
                                i=1  j=1

          where M (x) and N (y) denote normalised cubic B-splines, the
                 i         j
          former defined on the knots (lambda)  to (lambda)    and the
                                              i            i+4
          latter on the knots (mu)  to (mu)   . For further details, see
                                  j        j+4
          Hayes and Halliday [4] for bicubic splines and de Boor [1] for
          normalised B-splines.

          The total numbers n  and n  of these knots and their values
                             x      y
          (lambda) ,...,(lambda)   and (mu) ,...,(mu)   are chosen
                  1             n          1         n
                                 x                    y
          automatically by the routine. The knots (lambda) ,...,
                                                          5
          (lambda)     and (mu) ,...,(mu)     are the interior knots; they
                  n -4         5         n -4
                   x                      y
          divide the approximation domain [x ,x  ]*[y ,y  ] into (
                                            1  m     1  m
                                               m        m
          n -7)*(n -7) subpanels [(lambda) ,(lambda)   ]*[(mu) ,(mu)   ],
           x      y                       i         i+1       j     j+1
          for i=4,5,...,n -4, j=4,5,...,n -4. Then, much as in the curve
                         x               y
          case (see E02BEF), the coefficients c   are determined as the
                                               ij
          solution of the following constrained minimization problem:

          minimize

                                     (eta),                             (2)

          subject to the constraint

                                 m   m
                                  x   y
                                 --  --          2
                        (theta)= >   >  (epsilon)   <=S,                (3)
                                 --  --          q,r
                                 q=1 r=1

          where  (eta) is a measure of the (lack of) smoothness of s(x,y).
                       Its value depends on the discontinuity jumps in
                       s(x,y) across the boundaries of the subpanels. It is
                       zero only when there are no discontinuities and is
                       positive otherwise, increasing with the size of the
                       jumps (see Dierckx [2] for details).

                 (epsilon)    denotes the residual f   -s(x ,y ),
                          q,r                       q,r    q  r

          and    S    is a non-negative number to be specified by the user.

          By means of the parameter S, 'the smoothing factor', the user
          will then control the balance between smoothness and closeness of
          fit, as measured by the sum of squares of residuals in (3). If S
          is too large, the spline will be too smooth and signal will be
          lost (underfit); if S is too small, the spline will pick up too
          much noise (overfit). In the extreme cases the routine will
          return an interpolating spline ((theta)=0) if S is set to zero,
          and the least-squares bicubic polynomial ((eta)=0) if S is set
          very large. Experimenting with S-values between these two
          extremes should result in a good compromise. (See Section 8.3 for
          advice on choice of S.)

          The method employed is outlined in Section 8.5 and fully
          described in Dierckx [2] and [3]. It involves an adaptive
          strategy for locating the knots of the bicubic spline (depending
          on the function underlying the data and on the value of S), and
          an iterative method for solving the constrained minimization
          problem once the knots have been determined.

          Values of the computed spline can subsequently be computed by
          calling E02DEF or E02DFF as described in Section 8.6.
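
          As an illustration, a single cold-start call of the routine
          might be set up as sketched below; the workspace lengths follow
          the constraints given under LWRK and LIWRK in Section 5, and
          the arrays are assumed to have been declared with at least
          these lengths.

          * Hedged sketch: a cold-start call of E02DCF, with workspace
          * lengths taken from the constraints in Section 5.
                LWRK = 4*(MX+MY) + 11*(NXEST+NYEST) + NXEST*MY +
               *       MAX(MY,NXEST) + 54
                LIWRK = 3 + MX + MY + NXEST + NYEST
                START = 'C'
                IFAIL = 0
                CALL E02DCF(START,MX,X,MY,Y,F,S,NXEST,NYEST,NX,LAMDA,
               *            NY,MU,C,FP,WRK,LWRK,IWRK,LIWRK,IFAIL)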

          4. References

          [1]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          [2]   Dierckx P (1982) A Fast Algorithm for Smoothing Data on a
                Rectangular Grid while using Spline Functions. SIAM J.
                Numer. Anal. 19 1286--1304.

          [3]   Dierckx P (1981) An Improved Algorithm for Curve Fitting
                with Spline Functions. Report TW54. Department of Computer
                Science, Katholieke Universiteit Leuven.

          [4]   Hayes J G and Halliday J (1974) The Least-squares Fitting of
                Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
                Appl. 14 89--103.

          [5]   Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
                10 177--183.

          5. Parameters

           1:  START -- CHARACTER*1                                   Input
               On entry: START must be set to 'C' or 'W'.

               If START = 'C' (Cold start), the routine will build up the
               knot set starting with no interior knots. No values need be
               assigned to the parameters NX, NY, LAMDA, MU, WRK or IWRK.

               If START = 'W' (Warm start), the routine will restart the
               knot-placing strategy using the knots found in a previous
               call of the routine. In this case, the parameters NX, NY,
               LAMDA, MU, WRK and IWRK must be unchanged from that previous
               call. This warm start can save much time in searching for a
               satisfactory value of S. Constraint: START = 'C' or 'W'.

           2:  MX -- INTEGER                                          Input
               On entry: m , the number of grid points along the x axis.
                          x
               Constraint: MX >= 4.

           3:  X(MX) -- DOUBLE PRECISION array                        Input
               On entry: X(q) must be set to x , the x co-ordinate of the
                                              q
               qth grid point along the x axis, for q=1,2,...,m .
                                                               x
               Constraint: x <x <...<x  .
                            1  2      m
                                       x

           4:  MY -- INTEGER                                          Input
               On entry: m , the number of grid points along the y axis.
                          y
               Constraint: MY >= 4.

           5:  Y(MY) -- DOUBLE PRECISION array                        Input
               On entry: Y(r) must be set to y , the y co-ordinate of the
                                              r
               rth grid point along the y axis, for r=1,2,...,m .
                                                               y
               Constraint: y <y <...<y  .
                            1  2      m
                                       y

           6:  F(MX*MY) -- DOUBLE PRECISION array                     Input
               On entry: F(m *(q-1)+r) must contain the data value f   ,
                            y                                       q,r
               for q=1,2,...,m  and r=1,2,...,m .
                              x                y

           7:  S -- DOUBLE PRECISION                                  Input
               On entry: the smoothing factor, S.

               If S=0.0, the routine returns an interpolating spline.

               If S is smaller than machine precision, it is assumed equal
               to zero.

               For advice on the choice of S, see Section 3 and Section 8.3.
               Constraint: S >= 0.0.

           8:  NXEST -- INTEGER                                       Input

           9:  NYEST -- INTEGER                                       Input
               On entry: an upper bound for the number of knots n  and n
                                                                  x      y
               required in the x- and y-directions respectively.

               In most practical situations, NXEST = m /2 and NYEST = m /2 is
                                                      x                y
               sufficient. NXEST and NYEST never need to be larger than
               m +4 and m +4 respectively, the numbers of knots needed for
                x        y
               interpolation (S=0.0). See also Section 8.4. Constraint:
               NXEST >= 8 and NYEST >= 8.

          10:  NX -- INTEGER                                   Input/Output
               On entry: if the warm start option is used, the value of NX
               must be left unchanged from the previous call. On exit: the
               total number of knots, n , of the computed spline with
                                       x
               respect to the x variable.

          11:  LAMDA(NXEST) -- DOUBLE PRECISION array          Input/Output
               On entry: if the warm start option is used, the values
               LAMDA(1), LAMDA(2),...,LAMDA(NX) must be left unchanged from
               the previous call. On exit: LAMDA contains the complete set
               of knots (lambda)  associated with the x variable, i.e., the
                                i
               interior knots LAMDA(5), LAMDA(6), ..., LAMDA(NX-4) as well
               as the additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) =
               LAMDA(4) = X(1) and LAMDA(NX-3) = LAMDA(NX-2) = LAMDA(NX-1)
               = LAMDA(NX) = X(MX) needed for the B-spline representation.

          12:  NY -- INTEGER                                   Input/Output
               On entry: if the warm start option is used, the value of NY
               must be left unchanged from the previous call. On exit: the
               total number of knots, n , of the computed spline with
                                       y
               respect to the y variable.

          13:  MU(NYEST) -- DOUBLE PRECISION array             Input/Output
               On entry: if the warm start option is used, the values MU
               (1), MU(2),...,MU(NY) must be left unchanged from the
               previous call. On exit: MU contains the complete set of
               knots (mu)  associated with the y variable, i.e., the
                         i
               interior knots MU(5), MU(6),...,MU(NY-4) as well as the
               additional knots MU(1) = MU(2) = MU(3) = MU(4) = Y(1) and MU
               (NY-3) = MU(NY-2) = MU(NY-1) = MU(NY) = Y(MY) needed for the
               B-spline representation.

          14:  C((NXEST-4)*(NYEST-4)) -- DOUBLE PRECISION array      Output
               On exit: the coefficients of the spline approximation. C(
               (n -4)*(i-1)+j) is the coefficient c   defined in Section 3.
                 y                                 ij

          15:  FP -- DOUBLE PRECISION                                Output
               On exit: the sum of squared residuals, (theta), of the
               computed spline approximation. If FP = 0.0, this is an
               interpolating spline. FP should equal S within a relative
               tolerance of 0.001 unless NX = NY = 8, when the spline has
               no interior knots and so is simply a bicubic polynomial. For
               knots to be inserted, S must be set to a value below the
               value of FP produced in this case.

          16:  WRK(LWRK) -- DOUBLE PRECISION array                Workspace
               On entry: if the warm start option is used, the values WRK
               (1),...,WRK(4) must be left unchanged from the previous
               call.

               This array is used as workspace.

          17:  LWRK -- INTEGER                                        Input
               On entry:
               the dimension of the array WRK as declared in the
               (sub)program from which E02DCF is called.
               Constraint:
                    LWRK>=4*(MX+MY)+11*(NXEST+NYEST)+NXEST*MY

                          +max(MY,NXEST)+54.

          18:  IWRK(LIWRK) -- INTEGER array                       Workspace
               On entry: if the warm start option is used, the values IWRK
               (1), ..., IWRK(3) must be left unchanged from the previous
               call.

               This array is used as workspace.

          19:  LIWRK -- INTEGER                                       Input
               On entry:
               the dimension of the array IWRK as declared in the
               (sub)program from which E02DCF is called.
               Constraint: LIWRK >= 3 + MX + MY + NXEST + NYEST.

          20:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry START /= 'C' or 'W',

               or       MX < 4,

               or       MY < 4,

               or       S < 0.0,

               or       S = 0.0 and NXEST < MX + 4,

               or       S = 0.0 and NYEST < MY + 4,

               or       NXEST < 8,

               or       NYEST < 8,

               or       LWRK < 4*(MX+MY)+11*(NXEST+NYEST)+NXEST*MY
                               +max(MY,NXEST)+54

               or       LIWRK < 3 + MX + MY + NXEST + NYEST.

          IFAIL= 2
               The values of X(q), for q = 1,2,...,MX, are not in strictly
               increasing order.

          IFAIL= 3
               The values of Y(r), for r = 1,2,...,MY, are not in strictly
               increasing order.

          IFAIL= 4
               The number of knots required is greater than allowed by
               NXEST and NYEST. Try increasing NXEST and/or NYEST and, if
               necessary, supplying larger arrays for the parameters LAMDA,
               MU, C, WRK and IWRK. However, if NXEST and NYEST are already
               large, say NXEST > MX/2 and NYEST > MY/2, then this error
               exit may indicate that S is too small.

          IFAIL= 5
               The iterative process used to compute the coefficients of
               the approximating spline has failed to converge. This error
               exit may occur if S has been set very small. If the error
               persists with increased S, consult NAG.

          If IFAIL = 4 or 5, a spline approximation is returned, but it
          fails to satisfy the fitting criterion (see (2) and (3) in
          Section 3) -- perhaps by only a small amount, however.

          7. Accuracy

          On successful exit, the approximation returned is such that its
          sum of squared residuals FP is equal to the smoothing factor S,
          up to a specified relative tolerance of 0.001 - except that if
          n =8 and n =8, FP may be significantly less than S: in this case
           x        y
          the computed spline is simply the least-squares bicubic
          polynomial approximation of degree 3, i.e., a spline with no
          interior knots.

          8. Further Comments

          8.1. Timing

          The time taken for a call of E02DCF depends on the complexity of
          the shape of the data, the value of the smoothing factor S, and
          the number of data points. If E02DCF is to be called for
          different values of S, much time can be saved by setting START =

          8.2. Weighting of Data Points

          E02DCF does not allow individual weighting of the data values. If
          these were determined to widely differing accuracies, it may be
          better to use E02DDF. The computation time would be very much
          longer, however.

          8.3. Choice of S

          If the standard deviation of f    is the same for all q and r
                                        q,r
          (the case for which this routine is designed - see Section 8.2.)
          and known to be equal, at least approximately, to (sigma), say,
          then following Reinsch [5] and choosing the smoothing factor S in
                           2      
          the range (sigma) (m+-\/2m), where m=m m , is likely to give a
                                                x y
          good start in the search for a satisfactory value. If the
          standard deviations vary, the sum of their squares over all the
          data points could be used. Otherwise experimenting with different
          values of S will be required from the start, taking account of
          the remarks in Section 3.

          In that case, in view of computation time and memory
          requirements, it is recommended to start with a very large value
          for S and so determine the least-squares bicubic polynomial; the
          value returned for FP, call it FP , gives an upper bound for S.
                                           0
          Then progressively decrease the value of S to obtain closer fits
          - say by a factor of 10 in the beginning, i.e., S=FP /10,
                                                              0
          S=FP /100, and so on, and more carefully as the approximation
              0
          shows more details.

          The number of knots of the spline returned, and their location,
          generally depend on the value of S and on the behaviour of the
          function underlying the data. However, if E02DCF is called with
          START = 'W', the knots returned may also depend on the smoothing
          factors of the previous calls. Therefore if, after a number of
          trials with different values of S and START = 'W', a fit can
          finally be accepted as satisfactory, it may be worthwhile to call
          E02DCF once more with the selected value for S but now using
          START = 'C'. Often, E02DCF then returns an approximation with the
          same quality of fit but with fewer knots, which is therefore
          better if data reduction is also important.
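
          A minimal sketch of this strategy is given below, under
          illustrative assumptions: the starting value 1.0D6 for S, the
          number of trials NTRIAL and the loop variable ITRIAL are not
          part of the routine's interface, and in practice each fit
          would be examined before deciding whether to continue.

          * Hedged sketch: a cold start with a very large S to obtain
          * the upper bound FP0, then warm starts with S reduced by a
          * factor of 10 each time, and a final cold start at the
          * accepted value of S.
                START = 'C'
                S = 1.0D6
                IFAIL = 0
                CALL E02DCF(START,MX,X,MY,Y,F,S,NXEST,NYEST,NX,LAMDA,
               *            NY,MU,C,FP,WRK,LWRK,IWRK,LIWRK,IFAIL)
                S = FP
                START = 'W'
                DO 10 ITRIAL = 1, NTRIAL
                   S = S/10.0D0
                   IFAIL = 0
                   CALL E02DCF(START,MX,X,MY,Y,F,S,NXEST,NYEST,NX,
               *               LAMDA,NY,MU,C,FP,WRK,LWRK,IWRK,LIWRK,
               *               IFAIL)
          * ... examine the fit; stop when it is satisfactory ...
             10 CONTINUE
                START = 'C'
                IFAIL = 0
                CALL E02DCF(START,MX,X,MY,Y,F,S,NXEST,NYEST,NX,LAMDA,
               *            NY,MU,C,FP,WRK,LWRK,IWRK,LIWRK,IFAIL)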

          8.4. Choice of NXEST and NYEST

          The number of knots may also depend on the upper bounds NXEST and
          NYEST. Indeed, if at a certain stage in E02DCF the number of
          knots in one direction (say n ) has reached the value of its
                                       x
          upper bound (NXEST), then from that moment on all subsequent
          knots are added in the other (y) direction. Therefore the user
          has the option of limiting the number of knots the routine
          locates in any direction. For example, by setting NXEST = 8 (the
          lowest allowable value for NXEST), the user can indicate that he
          wants an approximation which is a simple cubic polynomial in the
          variable x.

          8.5. Outline of Method Used

          If S=0, the requisite number of knots is known in advance, i.e.,
          n =m +4 and n =m +4; the interior knots are located immediately
           x  x        y  y
          as (lambda)  = x    and (mu)  = y   , for i=5,6,...,n -4 and
                     i    i-2         j    j-2                 x
          j=5,6,...,n -4. The corresponding least-squares spline is then an
                     y
          interpolating spline and therefore a solution of the problem.

          If S>0, suitable knot sets are built up in stages (starting with
          no interior knots in the case of a cold start but with the knot
          set found in a previous call if a warm start is chosen). At each
          stage, a bicubic spline is fitted to the data by least-squares,
          and (theta), the sum of squares of residuals, is computed. If
          (theta)>S, new knots are added to one knot set or the other so as
          to reduce (theta) at the next stage. The new knots are located in
          intervals where the fit is particularly poor, their number
          depending on the value of S and on the progress made so far in
          reducing (theta). Sooner or later, we find that (theta)<=S and at
          that point the knot sets are accepted. The routine then goes on
          to compute the (unique) spline which has these knot sets and
          which satisfies the full fitting criterion specified by (2) and
          (3). The theoretical solution has (theta)=S. The routine computes
          the spline by an iterative scheme which is ended when (theta)=S
          within a relative tolerance of 0.001. The main part of each
          iteration consists of a linear least-squares computation of
          special form, done in a similarly stable and efficient manner as
          in E02BAF for least-squares curve fitting.

          An exception occurs when the routine finds at the start that,
          even with no interior knots (n =n =8), the least-squares spline
                                        x  y
          already has its sum of residuals <=S. In this case, since this
          spline (which is simply a bicubic polynomial) also has an optimal
          value for the smoothness measure (eta), namely zero, it is
          returned at once as the (trivial) solution. It will usually mean
          that S has been chosen too large.

          For further details of the algorithm and its use see Dierckx [2].

          8.6. Evaluation of Computed Spline

          The values of the computed spline at the points (TX(r),TY(r)),
          for r = 1,2,...,N, may be obtained in the double precision array
          FF, of length at least N, by the following code:

                 IFAIL = 0
                 CALL E02DEF(N,NX,NY,TX,TY,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)

          where NX, NY, LAMDA, MU and C are the output parameters of E02DCF,
          WRK is a double precision workspace array of length at least
          NY-4, and IWRK is an integer workspace array of length at least
          NY-4.

          To evaluate the computed spline on a KX by KY rectangular grid of
          points in the x-y plane, which is defined by the x co-ordinates
          stored in TX(q), for q=1,2,...,KX, and the y co-ordinates stored
          in TY(r), for r=1,2,...,KY, returning the results in the double
          precision array FG which is of length at least KX*KY, the
          following call may be used:

                IFAIL = 0
                CALL E02DFF(KX,KY,NX,NY,TX,TY,LAMDA,MU,C,FG,WRK,LWRK,
               *            IWRK,LIWRK,IFAIL)

          where NX, NY, LAMDA, MU and C are the output parameters of E02DCF,
          WRK is a double precision workspace array of length at least
          LWRK = min(NWRK1,NWRK2), NWRK1 = KX*4+NX, NWRK2 = KY*4+NY, and
          IWRK is an integer workspace array of length at least LIWRK = KY
          + NY - 4 if NWRK1 >= NWRK2, or KX + NX - 4 otherwise. The result
          of the spline evaluated at grid point (q,r) is returned in
          element (KY*(q-1)+r) of the array FG.
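
          The two workspace lengths quoted in the previous paragraph
          could, for example, be computed as follows (a sketch only;
          NWRK1 and NWRK2 are simply the intermediate quantities defined
          above):

          * Hedged sketch: minimum workspace lengths for the E02DFF
          * call above, following the rules stated in the text.
                NWRK1 = KX*4 + NX
                NWRK2 = KY*4 + NY
                LWRK = MIN(NWRK1,NWRK2)
                IF (NWRK1.GE.NWRK2) THEN
                   LIWRK = KY + NY - 4
                ELSE
                   LIWRK = KX + NX - 4
                END IF

          In a complete program these lengths would normally also be
          used, or bounded above, when declaring the arrays WRK and IWRK
          themselves.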

          9. Example

          This example program reads in values of MX, MY, x , for q =
                                                           q
          1,2,...,MX, y , for r = 1,2,...,MY, followed by values of the
                       r
          ordinates f    defined at the grid points (x ,y ). It then calls
                     q,r                              q  r
          E02DCF to compute a bicubic spline approximation for one
          specified value of S, and prints the values of the computed knots
          and B-spline coefficients. Finally it evaluates the spline at a
          small sample of points on a rectangular grid.
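
          As a minimal sketch of part of this preparation (FGRID, Q and
          R below are illustrative names, not part of the routine's
          interface), the data values could be copied into F in the
          order required by the parameter F in Section 5 as follows:

          * Hedged sketch: load F in the order F(MY*(q-1)+r) = f(q,r)
          * from an illustrative two-dimensional array FGRID(MX,MY).
                INTEGER Q, R
                DO 20 Q = 1, MX
                   DO 10 R = 1, MY
                      F(MY*(Q-1)+R) = FGRID(Q,R)
             10    CONTINUE
             20 CONTINUE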

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02ddf}{NAG On-line Documentation: e02ddf}
\beginscroll
\begin{verbatim}



     E02DDF(3NAG)      Foundation Library (12/10/92)      E02DDF(3NAG)



          E02 -- Curve and Surface Fitting                           E02DDF
                  E02DDF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02DDF computes a bicubic spline approximation to a set of
          scattered data. The knots of the spline are located
          automatically, but a single parameter must be specified to
          control the trade-off between closeness of fit and smoothness of
          fit.

          2. Specification

                 SUBROUTINE E02DDF (START, M, X, Y, F, W, S, NXEST, NYEST,
                1                   NX, LAMDA, NY, MU, C, FP, RANK, WRK,
                2                   LWRK, IWRK, LIWRK, IFAIL)
                 INTEGER          M, NXEST, NYEST, NX, NY, RANK, LWRK, IWRK
                1                 (LIWRK), LIWRK, IFAIL
                 DOUBLE PRECISION X(M), Y(M), F(M), W(M), S, LAMDA(NXEST),
                1                 MU(NYEST), C((NXEST-4)*(NYEST-4)), FP, WRK
                2                 (LWRK)
                 CHARACTER*1      START

          3. Description

          This routine determines a smooth bicubic spline approximation
          s(x,y) to the set of data points (x ,y ,f ) with weights w , for
                                             r  r  r                r
          r=1,2,...,m.

          The approximation domain is considered to be the rectangle
          [x   ,x   ]*[y   ,y   ], where x    (y   ) and x    (y   ) denote
            min  max    min  max          min   min       max   max
          the lowest and highest data values of x (y).

          The spline is given in the B-spline representation

                                n -4 n -4
                                 x    y
                                --   --
                        s(x,y)= >    >   c  M (x)N (y),                 (1)
                                --   --   ij i    j
                                i=1  j=1

          where M (x) and N (y) denote normalised cubic B-splines, the
                 i         j
          former defined on the knots (lambda)  to (lambda)    and the
                                              i            i+4
          latter on the knots (mu)  to (mu)   . For further details, see
                                  j        j+4
          Hayes and Halliday [4] for bicubic splines and de Boor [1] for
          normalised B-splines.

          The total numbers n  and n  of these knots and their values
                             x      y
          (lambda) ,...,(lambda)   and (mu) ,...,(mu)   are chosen
                  1             n          1         n
                                 x                    y
          automatically by the routine. The knots (lambda) ,...,
                                                          5
          (lambda)     and (mu) ,..., (mu)     are the interior knots; they
                  n -4         5          n -4
                   x                       y
          divide the approximation domain [x   ,x   ]*[y   ,y   ] into (
                                            min  max    min  max
          n -7)*(n -7) subpanels [(lambda) ,(lambda)   ]*[(mu) ,(mu)   ],
           x      y                       i         i+1       j     j+1
          for i=4,5,...,n -4; j=4,5,...,n -4. Then, much as in the curve
                         x               y
          case (see E02BEF), the coefficients c   are determined as the
                                               ij
          solution of the following constrained minimization problem:

          minimize

                                     (eta),                             (2)

          subject to the constraint

                                    m
                                    --          2
                           (theta)= >  (epsilon) <=S                    (3)
                                    --          r
                                    r=1

          where: (eta) is a measure of the (lack of) smoothness of s(x,y).
                       Its value depends on the discontinuity jumps in
                       s(x,y) across the boundaries of the subpanels. It is
                       zero only when there are no discontinuities and is
                       positive otherwise, increasing with the size of the
                       jumps (see Dierckx [2] for details).

                 (epsilon)  denotes the weighted residual w (f -s(x ,y )),
                          r                                r  r    r  r

          and    S     is a non-negative number to be specified by the user.

          By means of the parameter S, 'the smoothing factor', the user
          will then control the balance between smoothness and closeness of
          fit, as measured by the sum of squares of residuals in (3). If S
          is too large, the spline will be too smooth and signal will be
          lost (underfit); if S is too small, the spline will pick up too
          much noise (overfit). In the extreme cases the method would
          return an interpolating spline ((theta)=0) if S were set to zero,
          and returns the least-squares bicubic polynomial ((eta)=0) if S
          is set very large. Experimenting with S-values between these two
          extremes should result in a good compromise. (See Section 8.2 for
          advice on choice of S.) Note, however, that this routine, unlike
          E02BEF and E02DCF, does not allow S to be set exactly to zero: to
          compute an interpolant to scattered data, E01SAF or E01SEF should
          be used.

          The method employed is outlined in Section 8.5 and fully
          described in Dierckx [2] and [3]. It involves an adaptive
          strategy for locating the knots of the bicubic spline (depending
          on the function underlying the data and on the value of S), and
          an iterative method for solving the constrained minimization
          problem once the knots have been determined.

          Values of the computed spline can subsequently be computed by
          calling E02DEF or E02DFF as described in Section 8.6.
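
          As an illustration, a cold-start call of this routine might be
          set up as sketched below. LWRK follows the constraint given
          under the parameter LWRK in Section 5 (here without the
          optional extra workspace LWRK2 needed for some rank-deficient
          problems); IU, IV and IW stand for the quantities u, v and w
          used there, and the arrays and LIWRK are assumed to have been
          declared and set in accordance with Section 5.

          * Hedged sketch: a cold-start call of E02DDF, with the
          * workspace length LWRK computed from the constraint quoted
          * in Section 5.
                IU = NXEST - 4
                IV = NYEST - 4
                IW = MAX(IU,IV)
                LWRK = (7*IU*IV+25*IW)*(IW+1) + 2*(IU+IV+4*M)
               *       + 23*IW + 56
                START = 'C'
                IFAIL = 0
                CALL E02DDF(START,M,X,Y,F,W,S,NXEST,NYEST,NX,LAMDA,NY,
               *            MU,C,FP,RANK,WRK,LWRK,IWRK,LIWRK,IFAIL)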

          4. References

          [1]   De Boor C (1972) On Calculating with B-splines. J. Approx.
                Theory. 6 50--62.

          [2]   Dierckx P (1981) An Algorithm for Surface Fitting with
                Spline Functions. IMA J. Num. Anal. 1 267--283.

          [3]   Dierckx P (1981) An Improved Algorithm for Curve Fitting
                with Spline Functions. Report TW54. Department of Computer
                Science, Katholieke Universiteit Leuven.

          [4]   Hayes J G and Halliday J (1974) The Least-squares Fitting of
                Cubic Spline Surfaces to General Data Sets. J. Inst. Math.
                Appl. 14 89--103.

          [5]   Peters G and Wilkinson J H (1970) The Least-squares Problem
                and Pseudo-inverses. Comput. J. 13 309--316.

          [6]   Reinsch C H (1967) Smoothing by Spline Functions. Num. Math.
                10 177--183.

          5. Parameters

           1:  START -- CHARACTER*1                                   Input
               On entry: START must be set to 'C' or 'W'.

               If START = 'C' (Cold start), the routine will build up the
               knot set starting with no interior knots. No values need be
               assigned to the parameters NX, NY, LAMDA, MU or WRK.

               If START = 'W' (Warm start), the routine will restart the
               knot-placing strategy using the knots found in a previous
               call of the routine. In this case, the parameters NX, NY,
               LAMDA, MU and WRK must be unchanged from that previous call.
               This warm start can save much time in searching for a
               satisfactory value of S. Constraint: START = 'C' or 'W'.

           2:  M -- INTEGER                                           Input
               On entry: m, the number of data points.

               The number of data points with non-zero weight (see W below)
               must be at least 16.

           3:  X(M) -- DOUBLE PRECISION array                         Input

           4:  Y(M) -- DOUBLE PRECISION array                         Input

           5:  F(M) -- DOUBLE PRECISION array                         Input
               On entry: X(r), Y(r), F(r) must be set to the co-ordinates
               of (x ,y ,f ), the rth data point, for r=1,2,...,m. The
                    r  r  r
               order of the data points is immaterial.

           6:  W(M) -- DOUBLE PRECISION array                         Input
               On entry: W(r) must be set to w , the rth value in the set
                                              r
               of weights, for r=1,2,...,m. Zero weights are permitted and
               the corresponding points are ignored, except when
               determining x   , x   , y    and y    (see Section 8.4). For
                            min   max   min      max
               advice on the choice of weights, see Section 2.1.2 of the
               Chapter Introduction. Constraint: the number of data points
               with non-zero weight must be at least 16.

           7:  S -- DOUBLE PRECISION                                  Input
               On entry: the smoothing factor, S.

               For advice on the choice of S, see Section 3 and Section 8.2.
               Constraint: S > 0.0.

           8:  NXEST -- INTEGER                                       Input


           9:  NYEST -- INTEGER                                       Input
               On entry: an upper bound for the number of knots n  and n
                                                                 x      y
               required in the x- and y-directions respectively.
                                                                 ___
               In most practical situations, NXEST = NYEST = 4+\/m/2 is
               sufficient. See also Section 8.3. Constraint: NXEST >= 8 and
               NYEST >= 8.

          10:  NX -- INTEGER                                   Input/Output
               On entry: if the warm start option is used, the value of NX
               must be left unchanged from the previous call. On exit: the
               total number of knots, n , of the computed spline with
                                       x
               respect to the x variable.

          11:  LAMDA(NXEST) -- DOUBLE PRECISION array          Input/Output
               On entry: if the warm start option is used, the values LAMDA
               (1), LAMDA(2),...,LAMDA(NX) must be left unchanged from the
               previous call. On exit: LAMDA contains the complete set of
               knots (lambda)  associated with the x variable, i.e., the
                             i
               interior knots LAMDA(5), LAMDA(6),...,LAMDA(NX-4) as well as
               the additional knots LAMDA(1) = LAMDA(2) = LAMDA(3) = LAMDA
               (4) = x    and LAMDA(NX-3) = LAMDA(NX-2) = LAMDA(NX-1) =
                      min
               LAMDA(NX) = x    needed for the B-spline representation
                            max
               (where x    and x    are as described in Section 3).
                       min      max

          12:  NY -- INTEGER                                   Input/Output
               On entry: if the warm start option is used, the value of NY
               must be left unchanged from the previous call. On exit: the
               total number of knots, n , of the computed spline with
                                       y
               respect to the y variable.

          13:  MU(NYEST) -- DOUBLE PRECISION array             Input/Output
               On entry: if the warm start option is used, the values MU(1),
               MU(2),...,MU(NY) must be left unchanged from the previous
               call. On exit: MU contains the complete set of knots (mu)
                                                                        i
               associated with the y variable, i.e., the interior knots MU
               (5), MU(6),...,MU(NY-4) as well as the additional knots MU
               (1) = MU(2) = MU(3) = MU(4) = y    and MU(NY-3) = MU(NY-2) =
                                              min
               MU(NY-1) = MU(NY) = y    needed for the B-spline
                                    max
               representation (where y    and y    are as described in
                                      min      max
               Section 3).

          14:  C((NXEST-4)*(NYEST-4)) -- DOUBLE PRECISION array      Output
               On exit: the coefficients of the spline approximation. C(
               (n -4)*(i-1)+j) is the coefficient c   defined in Section 3.
                 y                                 ij

          15:  FP -- DOUBLE PRECISION                                Output
               On exit: the weighted sum of squared residuals, (theta), of
               the computed spline approximation. FP should equal S within
               a relative tolerance of 0.001 unless NX = NY = 8, when the
               spline has no interior knots and so is simply a bicubic
               polynomial. For knots to be inserted, S must be set to a
               value below the value of FP produced in this case.

          16:  RANK -- INTEGER                                       Output
               On exit: RANK gives the rank of the system of equations used
               to compute the final spline (as determined by a suitable
               machine-dependent threshold). When RANK = (NX-4)*(NY-4), the
               solution is unique; otherwise the system is rank-deficient
               and the minimum-norm solution is computed. The latter case
               may be caused by too small a value of S.

          17:  WRK(LWRK) -- DOUBLE PRECISION array                Workspace
               On entry: if the warm start option is used, the value of WRK
               (1) must be left unchanged from the previous call.

               This array is used as workspace.

          18:  LWRK -- INTEGER                                        Input
               On entry:
               the dimension of the array WRK as declared in the
               (sub)program from which E02DDF is called.
               Constraint: LWRK >= (7*u*v+25*w)*(w+1)+2*(u+v+4*M)+23*w+56,

               where

               u=NXEST-4, v=NYEST-4, and w=max(u,v).

               For some problems, the routine may need to compute the
               minimal least-squares solution of a rank-deficient system of
               linear equations (see Section 3). The amount of workspace
               required to solve such problems will be larger than
               specified by the value given above, which must be increased
               by an amount, LWRK2 say. An upper bound for LWRK2 is given
               by 4*u*v*w+2*u*v+4*w, where u, v and w are as above.
               However, if there are enough data points, scattered
               uniformly over the approximation domain, and if the
               smoothing factor S is not too small, there is a good chance
               that this extra workspace is not needed. A lot of memory
               might therefore be saved by assuming LWRK2 = 0.

          19:  IWRK(LIWRK) -- INTEGER array                       Workspace


          20:  LIWRK -- INTEGER                                       Input
               On entry:
               the dimension of the array IWRK as declared in the
               (sub)program from which E02DDF is called.
               Constraint: LIWRK>=M+2*(NXEST-7)*(NYEST-7).

          21:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry START /= 'C' or 'W',

               or       the number of data points with non-zero weight <
                        16,

               or       S <= 0.0,

               or       NXEST < 8,

               or       NYEST < 8,

               or       LWRK < (7*u*v+25*w)*(w+1)+2*(u+v+4*M)+23*w+56,
                        where u = NXEST - 4, v = NYEST - 4 and w=max(u,v),

               or       LIWRK <M+2*(NXEST-7)*(NYEST-7).

          IFAIL= 2
               On entry either all the X(r), for r = 1,2,...,M, are equal,
               or all the Y(r), for r = 1,2,...,M, are equal.

          IFAIL= 3
               The number of knots required is greater than allowed by
               NXEST and NYEST. Try increasing NXEST and/or NYEST and, if
               necessary, supplying larger arrays for the parameters LAMDA,
                MU, C, WRK and IWRK. However, if NXEST and NYEST are already
                                               ___
                large, say NXEST, NYEST > 4 + \/M/2, then this error exit
               may indicate that S is too small.

          IFAIL= 4
               No more knots can be added because the number of B-spline
               coefficients (NX-4)*(NY-4) already exceeds the number of
               data points M. This error exit may occur if either of S or M
               is too small.

          IFAIL= 5
               No more knots can be added because the additional knot would
               (quasi) coincide with an old one. This error exit may occur
               if too large a weight has been given to an inaccurate data
               point, or if S is too small.

          IFAIL= 6
               The iterative process used to compute the coefficients of
               the approximating spline has failed to converge. This error
               exit may occur if S has been set very small. If the error
               persists with increased S, consult NAG.

          IFAIL= 7
               LWRK is too small; the routine needs to compute the minimal
               least-squares solution of a rank-deficient system of linear
               equations, but there is not enough workspace. There is no
               approximation returned but, having saved the information
               contained in NX, LAMDA, NY, MU and WRK, and having adjusted
               the value of LWRK and the dimension of array WRK
               accordingly, the user can continue at the point the program
               was left by calling E02DDF with START = 'W'. Note that the
               requested value for LWRK is only large enough for the
               current phase of the algorithm. If the routine is restarted
               with LWRK set to the minimum value requested, a larger
               request may be made at a later stage of the computation. See
               Section 5 for the upper bound on LWRK. On soft failure, the
               minimum requested value for LWRK is returned in IWRK(1) and
               the safe value for LWRK is returned in IWRK(2).

          If IFAIL = 3,4,5 or 6, a spline approximation is returned, but it
          fails to satisfy the fitting criterion (see (2) and (3) in
           Section 3) -- perhaps only by a small amount, however.
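
           As an illustration only (not part of the routine document), the
           warm restart described under IFAIL = 7 might be programmed as in
           the following sketch. It assumes that the physical size of WRK
           was declared at least as large as the safe value that E02DDF
           returns in IWRK(2), and that IFAIL is set to -1 so that control
           is returned after a failure:

          C     first attempt, using the minimum LWRK of Section 5
                 START = 'C'
                 IFAIL = -1
                 CALL E02DDF(START,M,X,Y,F,W,S,NXEST,NYEST,NX,LAMDA,NY,
                *            MU,C,FP,RANK,WRK,LWRK,IWRK,LIWRK,IFAIL)
                 IF (IFAIL.EQ.7) THEN
          C        IWRK(2) holds a safe value for LWRK; NX, LAMDA, NY, MU
          C        and WRK are left as E02DDF returned them
                    LWRK = IWRK(2)
                    START = 'W'
                    IFAIL = -1
                    CALL E02DDF(START,M,X,Y,F,W,S,NXEST,NYEST,NX,LAMDA,
                *               NY,MU,C,FP,RANK,WRK,LWRK,IWRK,LIWRK,
                *               IFAIL)
                 END IF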

          7. Accuracy

          On successful exit, the approximation returned is such that its
          weighted sum of squared residuals FP is equal to the smoothing
          factor S, up to a specified relative tolerance of 0.001 - except
          that if n =8 and n =8, FP may be significantly less than S: in
                   x        y
          this case the computed spline is simply the least-squares bicubic
          polynomial approximation of degree 3, i.e., a spline with no
          interior knots.

          8. Further Comments

          8.1. Timing

          The time taken for a call of E02DDF depends on the complexity of
          the shape of the data, the value of the smoothing factor S, and
          the number of data points. If E02DDF is to be called for
           different values of S, much time can be saved by setting START =
           'W' after the first call. It should be noted that choosing S very
           small considerably increases computation time.

          8.2. Choice of S

          If the weights have been correctly chosen (see Section 2.1.2 of
          the Chapter Introduction), the standard deviation of w f  would
                                                                r r
          be the same for all r, equal to (sigma), say. In this case,
                                                              2      
          choosing the smoothing factor S in the range (sigma) (m+-\/2m),
          as suggested by Reinsch [6], is likely to give a good start in
          the search for a satisfactory value. Otherwise, experimenting
          with different values of S will be required from the start.

          In that case, in view of computation time and memory
          requirements, it is recommended to start with a very large value
          for S and so determine the least-squares bicubic polynomial; the
          value returned for FP, call it FP , gives an upper bound for S.
                                           0
          Then progressively decrease the value of S to obtain closer fits
          - say by a factor of 10 in the beginning, i.e., S=FP /10,
                                                              0
          S=FP /100, and so on, and more carefully as the approximation
              0
          shows more details.

          To choose S very small is strongly discouraged. This considerably
          increases computation time and memory requirements. It may also
          cause rank-deficiency (as indicated by the parameter RANK) and
          endanger numerical stability.

          The number of knots of the spline returned, and their location,
          generally depend on the value of S and on the behaviour of the
          function underlying the data. However, if E02DDF is called with
          START = 'W', the knots returned may also depend on the smoothing
          factors of the previous calls. Therefore if, after a number of
          trials with different values of S and START = 'W', a fit can
          finally be accepted as satisfactory, it may be worthwhile to call
          E02DDF once more with the selected value for S but now using
          START = 'C'. Often, E02DDF then returns an approximation with the
          same quality of fit but with fewer knots, which is therefore
          better if data reduction is also important.
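
           As an illustration only (not part of the routine document), the
           search strategy described above might be sketched as follows,
           where BIG and NSTEP are hypothetical names for a very large
           initial value of S and for the number of trial reductions:

          C     cold start with a very large S to obtain FP0
                 START = 'C'
                 S = BIG
                 IFAIL = 0
                 CALL E02DDF(START,M,X,Y,F,W,S,NXEST,NYEST,NX,LAMDA,NY,
                *            MU,C,FP,RANK,WRK,LWRK,IWRK,LIWRK,IFAIL)
                 FP0 = FP
          C     warm starts with progressively smaller S
                 START = 'W'
                 DO 10 K = 1, NSTEP
                    S = FP0/10.0D0**K
                    IFAIL = 0
                    CALL E02DDF(START,M,X,Y,F,W,S,NXEST,NYEST,NX,LAMDA,
                *               NY,MU,C,FP,RANK,WRK,LWRK,IWRK,LIWRK,
                *               IFAIL)
          C        ... inspect FP, the knots and the fit; exit the loop
          C        when the approximation is acceptable ...
              10 CONTINUE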

          8.3. Choice of NXEST and NYEST

          The number of knots may also depend on the upper bounds NXEST and
          NYEST. Indeed, if at a certain stage in E02DDF the number of
          knots in one direction (say n ) has reached the value of its
                                       x
          upper bound (NXEST), then from that moment on all subsequent
          knots are added in the other (y) direction. This may indicate
          that the value of NXEST is too small. On the other hand, it gives
          the user the option of limiting the number of knots the routine
          locates in any direction. For example, by setting NXEST = 8 (the
          lowest allowable value for NXEST), the user can indicate that he
          wants an approximation which is a simple cubic polynomial in the
          variable x.
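
           For example, for m = 200 data points the guideline of Section 5,
           NXEST = NYEST = 4+\/(m/2), gives NXEST = NYEST = 4+10 = 14, which
           comfortably satisfies the constraint NXEST >= 8 and NYEST >= 8.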

          8.4. Restriction of the approximation domain

          The fit obtained is not defined outside the rectangle
          [(lambda) ,(lambda)    ]*[(mu) ,(mu)    ]. The reason for taking
                   4         n -3       4     n -3
                              x                y
          the extreme data values of x and y for these four knots is that,
          as is usual in data fitting, the fit cannot be expected to give
          satisfactory values outside the data region. If, nevertheless,
          the user requires values over a larger rectangle, this can be
          achieved by augmenting the data with two artificial data points
          (a,c,0) and (b,d,0) with zero weight, where [a,b]*[c,d] denotes
          the enlarged rectangle.
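
           As an illustration only, this augmentation might be coded as in
           the following sketch, where XLO, XHI, YLO and YHI are
           hypothetical variables holding a, b, c and d, and the arrays X,
           Y, F and W are assumed to have been declared with room for at
           least M+2 entries:

          C     two artificial zero-weight points at opposite corners of
          C     the enlarged rectangle [a,b]*[c,d]
                 X(M+1) = XLO
                 Y(M+1) = YLO
                 F(M+1) = 0.0D0
                 W(M+1) = 0.0D0
                 X(M+2) = XHI
                 Y(M+2) = YHI
                 F(M+2) = 0.0D0
                 W(M+2) = 0.0D0
                 M = M + 2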

          8.5. Outline of method used

           First, suitable knot sets are built up in stages (starting with no
          interior knots in the case of a cold start but with the knot set
          found in a previous call if a warm start is chosen). At each
          stage, a bicubic spline is fitted to the data by least-squares
          and (theta), the sum of squares of residuals, is computed. If
          (theta)>S, a new knot is added to one knot set or the other so as
          to reduce (theta) at the next stage. The new knot is located in
          an interval where the fit is particularly poor. Sooner or later,
          we find that (theta)<=S and at that point the knot sets are
          accepted. The routine then goes on to compute a spline which has
          these knot sets and which satisfies the full fitting criterion
          specified by (2) and (3). The theoretical solution has (theta)=S.
          The routine computes the spline by an iterative scheme which is
          ended when (theta)=S within a relative tolerance of 0.001. The
           main part of each iteration consists of a linear least-squares
           computation of special form, done in a stable and efficient
           manner similar to that used in E02DAF. As in that routine, the
           minimal least-squares solution is computed whenever the linear
           system is found to be rank-deficient.

          An exception occurs when the routine finds at the start that,
           even with no interior knots (NX = NY = 8), the least-squares spline
          already has its sum of squares of residuals <=S. In this case,
          since this spline (which is simply a bicubic polynomial) also has
          an optimal value for the smoothness measure (eta), namely zero,
          it is returned at once as the (trivial) solution. It will usually
          mean that S has been chosen too large.

          For further details of the algorithm and its use see Dierckx [2].

          8.6. Evaluation of computed spline

          The values of the computed spline at the points (TX(r),TY(r)),
          for r = 1,2,...,N, may be obtained in the double precision array
          FF, of length at least N, by the following code:


                IFAIL = 0
                CALL E02DEF(N,NX,NY,TX,TY,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)


           where NX, NY, LAMDA, MU and C are the output parameters of
           E02DDF, WRK is a double precision workspace array of length at least
          NY-4, and IWRK is an integer workspace array of length at least
          NY-4.

          To evaluate the computed spline on a KX by KY rectangular grid of
          points in the x-y plane, which is defined by the x co-ordinates
          stored in TX(q), for q=1,2,...,KX, and the y co-ordinates stored
          in TY(r), for r=1,2,...,KY, returning the results in the double
          precision array FG which is of length at least KX*KY, the
          following call may be used:


                IFAIL = 0
                CALL E02DFF(KX,KY,NX,NY,TX,TY,LAMDA,MU,C,FG,WRK,LWRK,
               *            IWRK,LIWRK,IFAIL)


           where NX, NY, LAMDA, MU and C are the output parameters of
           E02DDF, WRK is a double precision workspace array of length at least
          LWRK = min(NWRK1,NWRK2), NWRK1 = KX*4+NX, NWRK2 = KY*4+NY, and
          IWRK is an integer workspace array of length at least LIWRK = KY
          + NY - 4 if NWRK1 >= NWRK2, or KX + NX - 4 otherwise. The result
          of the spline evaluated at grid point (q,r) is returned in
          element (KY*(q-1)+r) of the array FG.

          9. Example

          This example program reads in a value of M, followed by a set of
          M data points (x ,y ,f ) and their weights w . It then calls
                          r  r  r                     r
          E02DDF to compute a bicubic spline approximation for one
          specified value of S, and prints the values of the computed knots
          and B-spline coefficients. Finally it evaluates the spline at a
          small sample of points on a rectangular grid.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02def}{NAG On-line Documentation: e02def}
\beginscroll
\begin{verbatim}



     E02DEF(3NAG)      Foundation Library (12/10/92)      E02DEF(3NAG)



          E02 -- Curve and Surface Fitting                           E02DEF
                  E02DEF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02DEF calculates values of a bicubic spline from its B-spline
          representation.

          2. Specification

                 SUBROUTINE E02DEF (M, PX, PY, X, Y, LAMDA, MU, C, FF, WRK,
                1                   IWRK, IFAIL)
                 INTEGER          M, PX, PY, IWRK(PY-4), IFAIL
                 DOUBLE PRECISION X(M), Y(M), LAMDA(PX), MU(PY), C((PX-4)*
                1                 (PY-4)), FF(M), WRK(PY-4)

          3. Description

          This routine calculates values of the bicubic spline s(x,y) at
          prescribed points (x ,y ), for r=1,2,...,m, from its augmented
                              r  r
          knot sets {(lambda)} and {(mu)} and from the coefficients c  ,
                                                                     ij
          for i=1,2,...,PX-4; j=1,2,...,PY-4, in its B-spline
          representation

                                      --
                              s(x,y)= > c  M (x)N (y).
                                      -- ij i    j
                                      ij

          Here M (x) and N (y) denote normalised cubic B-splines, the
                i         j
          former defined on the knots (lambda)  to (lambda)    and the
                                              i            i+4
          latter on the knots (mu)  to (mu)   .
                                  j        j+4

          This routine may be used to calculate values of a bicubic spline
          given in the form produced by E01DAF, E02DAF, E02DCF and E02DDF.
          It is derived from the routine B2VRE in Anthony et al [1].

          4. References

          [1]   Anthony G T, Cox M G and Hayes J G (1982) DASL - Data
                Approximation Subroutine Library. National Physical
                Laboratory.

          [2]   Cox M G (1978) The Numerical Evaluation of a Spline from its
                B-spline Representation. J. Inst. Math. Appl. 21 135--143.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: m, the number of points at which values of the
               spline are required. Constraint: M >= 1.

           2:  PX -- INTEGER                                          Input

           3:  PY -- INTEGER                                          Input
               On entry: PX and PY must specify the total number of knots
               associated with the variables x and y respectively. They are
               such that PX-8 and PY-8 are the corresponding numbers of
               interior knots. Constraint: PX >= 8 and PY >= 8.

           4:  X(M) -- DOUBLE PRECISION array                         Input

           5:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: X and Y must contain x  and y , for r=1,2,...,m,
                                               r      r
               respectively. These are the co-ordinates of the points at
               which values of the spline are required. The order of the
               points is immaterial. Constraint: X and Y must satisfy

               LAMDA(4) <= X(r) <= LAMDA(PX-3)

               and

               MU(4) <= Y(r) <= MU(PY-3), for r=1,2,...,m.

               The spline representation is not valid outside these
               intervals.

           6:  LAMDA(PX) -- DOUBLE PRECISION array                    Input

           7:  MU(PY) -- DOUBLE PRECISION array                       Input
               On entry: LAMDA and MU must contain the complete sets of
               knots {(lambda)} and {(mu)} associated with the x and y
               variables respectively. Constraint: the knots in each set
               must be in non-decreasing order, with LAMDA(PX-3) > LAMDA(4)
               and MU(PY-3) > MU(4).

           8:  C((PX-4)*(PY-4)) -- DOUBLE PRECISION array             Input
               On entry: C((PY-4)*(i-1)+j) must contain the coefficient
               c   described in Section 3, for i=1,2,...,PX-4;
                ij
               j=1,2,...,PY-4.

           9:  FF(M) -- DOUBLE PRECISION array                       Output
               On exit: FF(r) contains the value of the spline at the
               point (x ,y ), for r=1,2,...,m.
                       r  r

          10:  WRK(PY-4) -- DOUBLE PRECISION array                Workspace

          11:  IWRK(PY-4) -- INTEGER array                        Workspace

          12:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry M < 1,

               or       PY < 8,

               or       PX < 8.

          IFAIL= 2
               On entry the knots in array LAMDA, or those in array MU, are
               not in non-decreasing order, or LAMDA(PX-3) <= LAMDA(4), or
               MU(PY-3) <= MU(4).

          IFAIL= 3
               On entry at least one of the prescribed points (x ,y ) lies
                                                                r  r
               outside the rectangle defined by LAMDA(4), LAMDA(PX-3) and
               MU(4), MU(PY-3).

          7. Accuracy

          The method used to evaluate the B-splines is numerically stable,
          in the sense that each computed value of s(x ,y ) can be regarded
                                                      r  r
          as the value that would have been obtained in exact arithmetic
          from slightly perturbed B-spline coefficients. See Cox [2] for
          details.

          8. Further Comments

          Computation time is approximately proportional to the number of
          points, m, at which the evaluation is required.
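
           As an illustration only, a call of E02DEF might take the
           following form, where the values chosen for M, PX and PY are
           hypothetical and the array dimensions follow the Specification
           in Section 2:

                 INTEGER          M, PX, PY, IFAIL
                 PARAMETER        (M=20, PX=11, PY=10)
                 INTEGER          IWRK(PY-4)
                 DOUBLE PRECISION X(M), Y(M), LAMDA(PX), MU(PY),
                *                 C((PX-4)*(PY-4)), FF(M), WRK(PY-4)
          C     ... set X and Y, and the knots and coefficients LAMDA, MU
          C     and C, e.g. as returned by E02DDF with PX = NX, PY = NY ...
                 IFAIL = 0
                 CALL E02DEF(M,PX,PY,X,Y,LAMDA,MU,C,FF,WRK,IWRK,IFAIL)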

          9. Example

          This program reads in knot sets LAMDA(1),..., LAMDA(PX) and MU(1)
          ,..., MU(PY), and a set of bicubic spline coefficients c  .
                                                                  ij
          Following these are a value for m and the co-ordinates (x ,y ),
                                                                   r  r
          for r=1,2,...,m, at which the spline is to be evaluated.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02dff}{NAG On-line Documentation: e02dff}
\beginscroll
\begin{verbatim}



     E02DFF(3NAG)      Foundation Library (12/10/92)      E02DFF(3NAG)



          E02 -- Curve and Surface Fitting                           E02DFF
                  E02DFF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02DFF calculates values of a bicubic spline from its B-spline
          representation. The spline is evaluated at all points on a
          rectangular grid.

          2. Specification

                 SUBROUTINE E02DFF (MX, MY, PX, PY, X, Y, LAMDA, MU, C, FF,
                1                   WRK, LWRK, IWRK, LIWRK, IFAIL)
                 INTEGER          MX, MY, PX, PY, LWRK, IWRK(LIWRK), LIWRK,
                1                 IFAIL
                 DOUBLE PRECISION X(MX), Y(MY), LAMDA(PX), MU(PY), C((PX-4)*
                1                 (PY-4)), FF(MX*MY), WRK(LWRK)

          3. Description

          This routine calculates values of the bicubic spline s(x,y) on a
          rectangular grid of points in the x-y plane, from its augmented
          knot sets {(lambda)} and {(mu)} and from the coefficients c  ,
                                                                     ij
          for i=1,2,...,PX-4; j=1,2,...,PY-4, in its B-spline
          representation

                                      --
                              s(x,y)= > c  M (x)N (y).
                                      -- ij i    j
                                      ij

          Here M (x) and N (y) denote normalised cubic B-splines, the
                i         j
          former defined on the knots (lambda)  to (lambda)    and the
                                              i            i+4
          latter on the knots (mu)  to (mu)   .
                                  j        j+4

          The points in the grid are defined by co-ordinates x , for
                                                              q
          q=1,2,...,m , along the x axis, and co-ordinates y , for
                     x                                      r
          r=1,2,...,m  along the y axis.
                     y

          This routine may be used to calculate values of a bicubic spline
          given in the form produced by E01DAF, E02DAF, E02DCF and E02DDF.
          It is derived from the routine B2VRE in Anthony et al [1].

          4. References

          [1]   Anthony G T, Cox M G and Hayes J G (1982) DASL - Data
                Approximation Subroutine Library. National Physical
                Laboratory.

          [2]   Cox M G (1978) The Numerical Evaluation of a Spline from its
                B-spline Representation. J. Inst. Math. Appl. 21 135--143.

          5. Parameters

           1:  MX -- INTEGER                                          Input

           2:  MY -- INTEGER                                          Input
               On entry: MX and MY must specify m  and m  respectively,
                                                 x      y
               the number of points along the x and y axis that define the
               rectangular grid. Constraint: MX >= 1 and MY >= 1.

           3:  PX -- INTEGER                                          Input

           4:  PY -- INTEGER                                          Input
               On entry: PX and PY must specify the total number of knots
               associated with the variables x and y respectively. They are
               such that PX-8 and PY-8 are the corresponding numbers of
               interior knots. Constraint: PX >= 8 and PY >= 8.

           5:  X(MX) -- DOUBLE PRECISION array                        Input

           6:  Y(MY) -- DOUBLE PRECISION array                        Input
               On entry: X and Y must contain x , for q=1,2,...,m , and y ,
                                               q                 x       r
               for r=1,2,...,m , respectively. These are the x and y co-
                              y
               ordinates that define the rectangular grid of points at
               which values of the spline are required. Constraint: X and Y
               must satisfy

               LAMDA(4) <= X(q) < X(q+1) <= LAMDA(PX-3), for q=1,2,...,m -1
                                                                        x
               and

               MU(4) <= Y(r) < Y(r+1) <= MU(PY-3), for r=1,2,...,m -1.
                                                                  y

               The spline representation is not valid outside these
               intervals.

           7:  LAMDA(PX) -- DOUBLE PRECISION array                    Input

           8:  MU(PY) -- DOUBLE PRECISION array                       Input
               On entry: LAMDA and MU must contain the complete sets of
               knots {(lambda)} and {(mu)} associated with the x and y
               variables respectively. Constraint: the knots in each set
               must be in non-decreasing order, with LAMDA(PX-3) > LAMDA(4)
               and MU(PY-3) > MU(4).

           9:  C((PX-4)*(PY-4)) -- DOUBLE PRECISION array             Input
               On entry: C((PY-4)*(i-1)+j) must contain the coefficient
               c   described in Section 3, for i=1,2,...,PX-4;
                ij
               j=1,2,...,PY-4.

          10:  FF(MX*MY) -- DOUBLE PRECISION array                   Output
               On exit: FF(MY*(q-1)+r) contains the value of the spline at
               the point (x ,y ), for q=1,2,...,m ; r=1,2,...,m .
                           q  r                  x             y

          11:  WRK(LWRK) -- DOUBLE PRECISION array                Workspace

          12:  LWRK -- INTEGER                                        Input
               On entry:
               the dimension of the array WRK as declared in the
               (sub)program from which E02DFF is called.
               Constraint: LWRK >= min(NWRK1,NWRK2), where NWRK1=4*MX+PX,
               NWRK2=4*MY+PY.

          13:  IWRK(LIWRK) -- INTEGER array                       Workspace

          14:  LIWRK -- INTEGER                                       Input
               On entry:
               the dimension of the array IWRK as declared in the
               (sub)program from which E02DFF is called.
               Constraint: LIWRK >= MY + PY - 4 if NWRK1 > NWRK2, or MX +
               PX - 4 otherwise, where NWRK1 and NWRK2 are as defined in
               the description of argument LWRK.

          15:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry MX < 1,

               or       MY < 1,

               or       PY < 8,

               or       PX < 8.

          IFAIL= 2
               On entry LWRK is too small,

               or       LIWRK is too small.

          IFAIL= 3
               On entry the knots in array LAMDA, or those in array MU, are
               not in non-decreasing order, or LAMDA(PX-3) <= LAMDA(4), or
               MU(PY-3) <= MU(4).

          IFAIL= 4
                On entry the restriction LAMDA(4) <= X(1) < ... < X(MX) <=
                LAMDA(PX-3), or the restriction MU(4) <= Y(1) < ... < Y(MY)
                <= MU(PY-3), is violated.

          7. Accuracy

          The method used to evaluate the B-splines is numerically stable,
          in the sense that each computed value of s(x ,y ) can be regarded
                                                      r  r
          as the value that would have been obtained in exact arithmetic
          from slightly perturbed B-spline coefficients. See Cox [2] for
          details.

          8. Further Comments

          Computation time is approximately proportional to m m +4(m +m ).
                                                             x y    x  y
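
           As an illustration only, workspace for a call of E02DFF might be
           chosen as in the following sketch, where the values of MX, MY,
           PX and PY are hypothetical. With these values NWRK1 = 4*MX+PX =
           39 and NWRK2 = 4*MY+PY = 34, so the constraints of Section 5
           require LWRK >= 34 and LIWRK >= MY+PY-4 = 12:

                 INTEGER          MX, MY, PX, PY, LWRK, LIWRK, IFAIL
                 PARAMETER        (MX=7, MY=6, PX=11, PY=10)
                 PARAMETER        (LWRK=4*MY+PY, LIWRK=MY+PY-4)
                 INTEGER          IWRK(LIWRK)
                 DOUBLE PRECISION X(MX), Y(MY), LAMDA(PX), MU(PY),
                *                 C((PX-4)*(PY-4)), FF(MX*MY), WRK(LWRK)
          C     ... set the grid X, Y and the spline LAMDA, MU, C ...
                 IFAIL = 0
                 CALL E02DFF(MX,MY,PX,PY,X,Y,LAMDA,MU,C,FF,WRK,LWRK,
                *            IWRK,LIWRK,IFAIL)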

          9. Example

          This program reads in knot sets LAMDA(1),..., LAMDA(PX) and MU(1)
          ,..., MU(PY), and a set of bicubic spline coefficients c  .
                                                                  ij
          Following these are values for m  and the x co-ordinates x , for
                                          x                         q
          q=1,2,...,m , and values for m  and the y co-ordinates y , for
                     x                  y                         r
          r=1,2,...,m , defining the grid of points on which the spline is
                     y
          to be evaluated.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02gaf}{NAG On-line Documentation: e02gaf}
\beginscroll
\begin{verbatim}



     E02GAF(3NAG)      Foundation Library (12/10/92)      E02GAF(3NAG)



          E02 -- Curve and Surface Fitting                           E02GAF
                  E02GAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02GAF calculates an l  solution to an over-determined system of
                                1
          linear equations.

          2. Specification

                 SUBROUTINE E02GAF (M, A, LA, B, NPLUS2, TOLER, X, RESID,
                1                   IRANK, ITER, IWORK, IFAIL)
                 INTEGER          M, LA, NPLUS2, IRANK, ITER, IWORK(M),
                1                 IFAIL
                 DOUBLE PRECISION A(LA,NPLUS2), B(M), TOLER, X(NPLUS2),
                1                 RESID

          3. Description

          Given a matrix A with m rows and n columns (m>=n) and a vector b
          with m elements, the routine calculates an l  solution to the
                                                      1
          over-determined system of equations

                                        Ax=b.

          That is to say, it calculates a vector x, with n elements, which
          minimizes the l -norm (the sum of the absolute values) of the
                         1
          residuals

                                         m
                                         --
                                   r(x)= >  |r |,
                                         --   i
                                         i=1

          where the residuals r  are given by
                               i

                                  n
                                  --
                           r =b - >  a  x ,   i=1,2,...,m.
                            i  i  --  ij j
                                  j=1

          Here a   is the element in row i and column j of A, b  is the ith
                ij                                             i
          element of b and x  the jth element of x. The matrix A need not
                            j
          be of full rank.

          Typically in applications to data fitting, data consisting of m
          points with co-ordinates (t ,y ) are to be approximated in the l
                                     i  i                                 1
          -norm by a linear combination of known functions (phi) (t),
                                                                j

             (alpha) (phi) (t)+(alpha) (phi) (t)+...+(alpha) (phi) (t).
                    1     1           2     2               n     n

          This is equivalent to fitting an l  solution to the over-
                                            1
          determined system of equations

                       n
                       --
                       >  (phi) (t )(alpha) =y ,   i=1,2,...,m.
                       --      j  i        j  i
                       j=1

          Thus if, for each value of i and j, the element a   of the matrix
                                                           ij
          A in the previous paragraph is set equal to the value of
          (phi) (t ) and b  is set equal to y , the solution vector x will
               j  i       i                  i
          contain the required values of the (alpha) . Note that the
                                                    j
          independent variable t above can, instead, be a vector of several
          independent variables (this includes the case where each (phi)
                                                                        i
          is a function of a different variable, or set of variables).

          The algorithm is a modification of the simplex method of linear
          programming applied to the primal formulation of the l  problem
                                                                1
          (see Barrodale and Roberts [1] and [2]). The modification allows
          several neighbouring simplex vertices to be passed through in a
          single iteration, providing a substantial improvement in
          efficiency.
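
           As an illustration only, the data-fitting use described above
           might be coded as in the following sketch, which fits a
           quadratic in t to m data points; the value chosen for M and the
           arrays T and YDATA holding the data are hypothetical:

                 INTEGER          M, LA, NPLUS2
                 PARAMETER        (M=25, LA=M+2, NPLUS2=5)
                 INTEGER          IWORK(M), IRANK, ITER, IFAIL, I
                 DOUBLE PRECISION A(LA,NPLUS2), B(M), X(NPLUS2), T(M),
                *                 YDATA(M), TOLER, RESID
          C     ... read or compute the data T and YDATA ...
          C     basis functions 1, t and t**2, so n = 3 and NPLUS2 = 5
                 DO 10 I = 1, M
                    A(I,1) = 1.0D0
                    A(I,2) = T(I)
                    A(I,3) = T(I)**2
                    B(I) = YDATA(I)
              10 CONTINUE
                 TOLER = 0.0D0
                 IFAIL = 0
                 CALL E02GAF(M,A,LA,B,NPLUS2,TOLER,X,RESID,IRANK,ITER,
                *            IWORK,IFAIL)
          C     X(1), X(2) and X(3) now hold the fitted coefficients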

          4. References

          [1]   Barrodale I and Roberts F D K (1973) An Improved Algorithm
                 for Discrete l  Linear Approximation. SIAM J. Numer.
                               1
                Anal. 10 839--848.

          [2]   Barrodale I and Roberts F D K (1974) Solution of an
                 Overdetermined System of Equations in the l -norm. Comm.
                                                             1
                ACM. 17, 6 319--320.

          5. Parameters

           1:  M -- INTEGER                                           Input
               On entry: the number of equations, m (the number of rows of
               the matrix A). Constraint: M >= n >= 1.

           2:  A(LA,NPLUS2) -- DOUBLE PRECISION array          Input/Output
               On entry: A(i,j) must contain a  , the element in the ith
                                              ij
               row and jth column of the matrix A, for i=1,2,...,m and
               j=1,2,...,n. The remaining elements need not be set. On
               exit: A contains the last simplex tableau generated by the
               simplex method.

           3:  LA -- INTEGER                                          Input
               On entry:
               the first dimension of the array A as declared in the
               (sub)program from which E02GAF is called.
               Constraint: LA >= M + 2.

           4:  B(M) -- DOUBLE PRECISION array                  Input/Output
               On entry: b , the ith element of the vector b, for
                          i
               i=1,2,...,m. On exit: the ith residual r  corresponding to
                                                       i
               the solution vector x, for i=1,2,...,m.

           5:  NPLUS2 -- INTEGER                                      Input
               On entry: n+2, where n is the number of unknowns (the
               number of columns of the matrix A). Constraint: 3 <= NPLUS2
               <= M + 2.

           6:  TOLER -- DOUBLE PRECISION                              Input
               On entry: a non-negative value. In general TOLER specifies
               a threshold below which numbers are regarded as zero. The
                                                       2/3
               recommended threshold value is (epsilon)    where (epsilon)
               is the machine precision. The recommended value can be
               computed within the routine by setting TOLER to zero. If
               premature termination occurs a larger value for TOLER may
               result in a valid solution. Suggested value: 0.0.

           7:  X(NPLUS2) -- DOUBLE PRECISION array                   Output
               On exit: X(j) contains the jth element of the solution
               vector x, for j=1,2,...,n. The elements X(n+1) and X(n+2)
               are unused.

           8:  RESID -- DOUBLE PRECISION                             Output
               On exit: the sum of the absolute values of the residuals
               for the solution vector x.

           9:  IRANK -- INTEGER                                      Output
               On exit: the computed rank of the matrix A.

          10:  ITER -- INTEGER                                       Output
               On exit: the number of iterations taken by the simplex
               method.

          11:  IWORK(M) -- INTEGER array                          Workspace

          12:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          IFAIL= 1
               An optimal solution has been obtained but this may not be
               unique.

          IFAIL= 2
               The calculations have terminated prematurely due to rounding
               errors. Experiment with larger values of TOLER or try
               scaling the columns of the matrix (see Section 8).

          IFAIL= 3
               On entry NPLUS2 < 3,

               or       NPLUS2 > M + 2,

               or       LA < M + 2.

          7. Accuracy

          Experience suggests that the computational accuracy of the
          solution x is comparable with the accuracy that could be obtained
          by applying Gaussian elimination with partial pivoting to the n
          equations satisfied by this algorithm (i.e., those equations with
          zero residuals). The accuracy therefore varies with the
          conditioning of the problem, but has been found generally very
          satisfactory in practice.

          8. Further Comments

          The effects of m and n on the time and on the number of
          iterations in the Simplex Method vary from problem to problem,
          but typically the number of iterations is a small multiple of n
          and the total time taken by the routine is approximately
                            2
          proportional to mn .

          It is recommended that, before the routine is entered, the
          columns of the matrix A are scaled so that the largest element in
          each column is of the order of unity. This should improve the
          conditioning of the matrix, and also enable the parameter TOLER
          to perform its correct function. The solution x obtained will
          then, of course, relate to the scaled form of the matrix. Thus if
          the scaling is such that, for each j=1,2,...,n, the elements of
          the jth column are multiplied by the constant k , the element x
                                                         j               j
          of the solution vector x must be multiplied by k  if it is
                                                          j
          desired to recover the solution corresponding to the original
          matrix A.
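
           As an illustration only, this column scaling might be coded as
           in the following sketch, where n = NPLUS2-2, SCALE is a
           hypothetical work array of length at least n, and no column of
           A is identically zero:

          C     scale each column so its largest absolute element is one
                 DO 20 J = 1, NPLUS2-2
                    SCALE(J) = 0.0D0
                    DO 10 I = 1, M
                       SCALE(J) = MAX(SCALE(J),ABS(A(I,J)))
              10    CONTINUE
                    DO 15 I = 1, M
                       A(I,J) = A(I,J)/SCALE(J)
              15    CONTINUE
              20 CONTINUE
                 IFAIL = 0
                 CALL E02GAF(M,A,LA,B,NPLUS2,TOLER,X,RESID,IRANK,ITER,
                *            IWORK,IFAIL)
          C     recover the solution for the unscaled matrix: column j was
          C     multiplied by k(j) = 1/SCALE(J), so multiply X(J) by k(j)
                 DO 30 J = 1, NPLUS2-2
                    X(J) = X(J)/SCALE(J)
              30 CONTINUE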

          9. Example

          Suppose we wish to approximate a set of data by a curve of the
          form

                   t   -t
               y=Ke +Le  +M

          where K, L and M are unknown. Given values y  at 5 points t  we
                                                      i              i
          may form the over-determined set of equations for K, L and M

                             t    -t
                             i     i
                           e  K+e   L+M=y ,  i=1,2,...,5.
                                         i

          E02GAF is used to solve these in the l  sense.
                                                1

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe02zaf}{NAG On-line Documentation: e02zaf}
\beginscroll
\begin{verbatim}



     E02ZAF(3NAG)      Foundation Library (12/10/92)      E02ZAF(3NAG)



          E02 -- Curve and Surface Fitting                           E02ZAF
                  E02ZAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E02ZAF sorts two-dimensional data into rectangular panels.

          2. Specification

                 SUBROUTINE E02ZAF (PX, PY, LAMDA, MU, M, X, Y, POINT,
                1                   NPOINT, ADRES, NADRES, IFAIL)
                 INTEGER          PX, PY, M, POINT(NPOINT), NPOINT, ADRES
                1                 (NADRES), NADRES, IFAIL
                 DOUBLE PRECISION LAMDA(PX), MU(PY), X(M), Y(M)

          3. Description

          A set of m data points with rectangular Cartesian co-ordinates
          x ,y  are sorted into panels defined by lines parallel to the y
           r  r
          and x axes. The intercepts of these lines on the x and y axes are
          given in LAMDA(i), for i=5,6,...,PX-4 and MU(j), for
          j=5,6,...,PY-4, respectively. The subroutine orders the data so
          that all points in a panel occur before data in succeeding
          panels, where the panels are numbered from bottom to top and then
          left to right, with the usual arrangement of axes, as shown in
          the diagram. Within a panel the points maintain their original
          order.

                   Please see figure in printed Reference Manual

          A data point lying exactly on one or more panel sides is taken to
          be in the highest-numbered panel adjacent to the point. The
          subroutine does not physically rearrange the data, but provides
          the array POINT which contains a linked list for each panel,
          pointing to the data in that panel. The total number of panels is
          (PX-7)*(PY-7).

          4. References

          None.

          5. Parameters

           1:  PX -- INTEGER                                          Input

           2:  PY -- INTEGER                                          Input

               On entry: PX and PY must specify eight more than the number
               of intercepts on the x axis and y axis, respectively.
               Constraint: PX >= 8 and PY >= 8.

           3:  LAMDA(PX) -- DOUBLE PRECISION array                    Input
               On entry: LAMDA(5) to LAMDA(PX-4) must contain, in non-
               decreasing order, the intercepts on the x axis of the sides
               of the panels parallel to the y axis.

           4:  MU(PY) -- DOUBLE PRECISION array                       Input
               On entry: MU(5) to MU(PY-4) must contain, in non-decreasing
               order, the intercepts on the y axis of the sides of the
               panels parallel to the x axis.

           5:  M -- INTEGER                                           Input
               On entry: the number m of data points.

           6:  X(M) -- DOUBLE PRECISION array                         Input

           7:  Y(M) -- DOUBLE PRECISION array                         Input
               On entry: the co-ordinates of the rth data point (x ,y ),
                                                                  r  r
               for r=1,2,...,m.

           8:  POINT(NPOINT) -- INTEGER array                        Output
               On exit: for i = 1,2,...,NADRES, POINT(m+i) = I1 is the
               index of the first point in panel i, POINT(I1) = I2 is the
               index of the second point in panel i and so on.

               POINT(IN) = 0 indicates that X(IN),Y(IN) was the last point
               in the panel.

               The co-ordinates of points in panel i can be accessed in
               turn by means of the following instructions:
                C  start at the list head for panel I
                   IN = M + I
                10 IN = POINT(IN)
                C  a zero link marks the last point in the panel
                   IF (IN.EQ.0) GOTO 20
                   XI = X(IN)
                   YI = Y(IN)
                   .
                   .
                   .
                   GOTO 10
                20 ...

           9:  NPOINT -- INTEGER                                      Input
               On entry:
               the dimension of the array POINT as declared in the
               (sub)program from which E02ZAF is called.
               Constraint: NPOINT >= M + (PX-7)*(PY-7).

          10:  ADRES(NADRES) -- INTEGER array                     Workspace

          11:  NADRES -- INTEGER                                      Input
               On entry: the value (PX-7)*(PY-7), the number of panels
               into which the (x,y) plane is divided.

          12:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. For users not
               familiar with this parameter (described in the Essential
               Introduction) the recommended value is 0.

               On exit: IFAIL = 0 unless the routine detects an error (see
               Section 6).

          6. Error Indicators and Warnings

          Errors detected by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               The intercepts in the array LAMDA, or in the array MU, are
               not in non-decreasing order.

          IFAIL= 2
               On entry PX < 8,

               or       PY < 8,

               or       M <= 0,

               or       NADRES /= (PX-7)*(PY-7),

               or       NPOINT < M + (PX-7)*(PY-7).

          7. Accuracy

          Not applicable.

          8. Further Comments

          The time taken by this routine is approximately proportional to
          m*log(NADRES).

           This subroutine was written to sort two-dimensional data in the
           manner required by routines E02DAF and E02DBF(*). The first 9
           parameters of E02ZAF are the same as the parameters of the same
           name in E02DAF and E02DBF(*).
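
           As an illustration only, a call of E02ZAF might take the
           following form, where the values chosen for PX, PY and M are
           hypothetical and NADRES and NPOINT follow the constraints in
           Section 5:

                 INTEGER          PX, PY, M, NADRES, NPOINT, IFAIL
                 PARAMETER        (PX=10, PY=9, M=30)
                 PARAMETER        (NADRES=(PX-7)*(PY-7), NPOINT=M+NADRES)
                 INTEGER          POINT(NPOINT), ADRES(NADRES)
                 DOUBLE PRECISION LAMDA(PX), MU(PY), X(M), Y(M)
          C     ... set LAMDA(5),...,LAMDA(PX-4) and MU(5),...,MU(PY-4),
          C     and the data points X and Y ...
                 IFAIL = 0
                 CALL E02ZAF(PX,PY,LAMDA,MU,M,X,Y,POINT,NPOINT,ADRES,
                *            NADRES,IFAIL)
          C     POINT may now be used to scan the data panel by panel, as
          C     illustrated in Section 5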

          9. Example

          This example program reads in data points and the intercepts of
          the panel sides on the x and y axes; it calls E02ZAF to set up
          the index array POINT; and finally it prints the data points in
          panel order.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04}{NAG On-line Documentation: e04}
\beginscroll
\begin{verbatim}



     E04(3NAG)         Foundation Library (12/10/92)         E04(3NAG)



          E04 -- Minimizing or Maximizing a Function    Introduction -- E04
                                    Chapter E04
                        Minimizing or Maximizing a Function

          Contents of this Introduction:

          1.     Scope of the Chapter

          2.     Background to the Problems

          2.1.   Types of Optimization Problems

          2.1.1. Unconstrained minimization

          2.1.2. Nonlinear least-squares problems

          2.1.3. Minimization subject to bounds on the variables

          2.1.4. Minimization subject to linear constraints

          2.1.5. Minimization subject to nonlinear constraints
          
          2.2.   Geometric Representation and Terminology

          2.2.1. Gradient vector

          2.2.2. Hessian matrix

          2.2.3. Jacobian matrix; matrix of constraint normals

          2.3.   Sufficient Conditions for a Solution

          2.3.1. Unconstrained minimization

          2.3.2. Minimization subject to bounds on the variables

          2.3.3. Linearly-constrained minimization

          2.3.4. Nonlinearly-constrained minimization

          2.4.   Background to Optimization Methods

          2.4.1. Methods for unconstrained optimization

          2.4.2. Methods for nonlinear least-squares problems

          2.4.3. Methods for handling constraints

          2.5.   Scaling

          2.5.1. Transformation of variables

          2.5.2. Scaling the objective function

          2.5.3. Scaling the constraints

          2.6.   Analysis of Computed Results

          2.6.1. Convergence criteria

          2.6.2. Checking results

          2.6.3. Monitoring progress

          2.6.4. Confidence intervals for least-squares solutions

          2.7.   References

          3.     Recommendations on Choice and Use of Routines

          3.1.   Choice of Routine

          3.2.   Service Routines

          3.3.   Function Evaluations at Infeasible Points

          3.4.   Related Problems



          1. Scope of the Chapter

          An optimization problem involves minimizing a function (called
          the objective function) of several variables, possibly subject to
          restrictions on the values of the variables defined by a set of
          constraint functions. The routines in the NAG Foundation Library
          are concerned with function minimization only, since the problem
          of maximizing a given function can be transformed into a
          minimization problem simply by multiplying the function by -1.

          This introduction is only a brief guide to the subject of
          optimization designed for the casual user. Anyone with a
          difficult or protracted problem to solve will find it beneficial
          to consult a more detailed text, such as Gill et al [5] or
          Fletcher [3].

          Readers who are unfamiliar with the mathematics of the subject
          may find some sections difficult at first reading; if so, they
          should concentrate on Sections 2.1, 2.2, 2.5, 2.6 and 3.

          2. Background to the Problems

          2.1. Types of Optimization Problems

          Solution of optimization problems by a single, all-purpose,
          method is cumbersome and inefficient. Optimization problems are
          therefore classified into particular categories, where each
          category is defined by the properties of the objective and
          constraint functions, as illustrated by some examples below.

          Properties of Objective   Properties of Constraints
          Function

          Nonlinear                 Nonlinear

          Sums of squares of        Sparse linear
          nonlinear functions

          Quadratic                 Linear

          Sums of squares of linear Bounds
          functions

          Linear                    None

          For instance, a specific problem category involves the
          minimization of a nonlinear objective function subject to bounds
          on the variables. In the following sections we define the
          particular categories of problems that can be solved by routines
          contained in this Chapter.

          2.1.1.  Unconstrained minimization

          In unconstrained minimization problems there are no constraints
          on the variables. The problem can be stated mathematically as
          follows:

                                    minimize F(x)
                                     x

                         n                           T
          where x is in R , that is, x=(x ,x ,...,x ) .
                                        1  2      n

          2.1.2.  Nonlinear least-squares problems

          Special consideration is given to the problem for which the
          function to be minimized can be expressed as a sum of squared
          functions. The least-squares problem can be stated mathematically
          as follows:

                                 {     m       }
                                 { T   --  2   }            n
                        minimize {f f= >  f (x)},  x is in R
                         x       {     --  i   }
                                 {     i=1     }

          where the ith element of the m-vector f is the function f (x).
                                                                   i
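
          For illustration only, the following sketch (not a Library
          routine; the model and the data are invented) assembles the
          objective F(x) as the sum of the squared residuals of a small
          curve-fitting problem.

*     Illustrative sketch only: evaluates the sum of squares F = f'f
*     for hypothetical residuals  f_i(x) = x(1)*exp(x(2)*t_i) - y_i.
      PROGRAM SUMSQ
      IMPLICIT NONE
      INTEGER          M, I
      PARAMETER        (M=5)
      DOUBLE PRECISION X(2), T(M), Y(M), FI, F
      DATA T /1.0D0, 2.0D0, 3.0D0, 4.0D0, 5.0D0/
      DATA Y /2.7D0, 7.4D0, 20.1D0, 54.6D0, 148.4D0/
      X(1) = 1.0D0
      X(2) = 1.0D0
      F = 0.0D0
      DO 10 I = 1, M
*        i-th residual and its contribution to the sum of squares
         FI = X(1)*EXP(X(2)*T(I)) - Y(I)
         F  = F + FI*FI
   10 CONTINUE
      WRITE (*,*) 'Sum of squares F(x) =', F
      END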

          2.1.3.  Minimization subject to bounds on the variables

          These problems differ from the unconstrained problem in that at
          least one of the variables is subject to a simple restriction on
          its value, e.g. x <=10, but no constraints of a more general form
                           5
          are present.

          The problem can be stated mathematically as follows:

                                                       n
                              minimize F(x),  x is in R
                               x

          subject to l <=x <=u , i=1,2,...,n.
                      i   i   i

          This format assumes that upper and lower bounds exist on all the
          variables. By conceptually allowing u =infty and l =-infty, not
                                               i            i
          all the variables need to be restricted.

          2.1.4.  Minimization subject to linear constraints

          A general linear constraint is defined as a constraint function
          that is linear in more than one of the variables, e.g. 3x +2x >=4.
                                                                   1   2
          The various types of linear constraint are reflected in the
          following mathematical statement of the problem:


                                                       n
                              minimize F(x),  x is in R
                               x

          subject to the

                             T
             equality       a x=b        i=1,2,...,m ;
             constraints:    i   i                1

                             T
             inequality     a x>=b       i=m +1,m +2,...,m ;
             constraints:    i    i         1    1        2

                             T
                            a x<=b       i=m +1,m +2,...,m ;
                             i    i         2    2        3

                                 T
             range          s <=a x<=t   i=m +1,m +2,...,m ;
             constraints:    j   i    j     3    3        4
                                         j=1,2,...,m -m ;
                                                    4  3

             bounds         l <=x <=u    i=1,2,...,n
             constraints:    i   i   i

          where each a  is a vector of length n; b , s  and t  are constant
                      i                           i   j      j
          scalars; and any of the categories may be empty.

          Although the bounds on x  could be included in the definition of
                                  i
          general linear constraints, we prefer to distinguish between them
          for reasons of computational efficiency.

          If F(x) is a linear function, the linearly-constrained problem is
          termed a linear programming problem (LP problem); if F(x) is a
          quadratic function, the problem is termed a quadratic programming
          problem (QP problem). For further discussion of LP and QP
          problems, including the dual formulation of such problems, see
          Dantzig [2].

          2.1.5.  Minimization subject to nonlinear constraints

          A problem is included in this category if at least one constraint
                                       2
          function is nonlinear, e.g. x +x +x -2>=0. The mathematical
                                       1  3  4
          statement of the problem is identical to that for the linearly-
          constrained case, except for the addition of the following
          constraints:

             equality       c (x)=0        i=1,2,...,m ;
             constraints:    i                        5

             inequality     c (x)>=0       i=m +1,m +2,...,m ;
             constraints:    i                5    5        6

             range          v <=c (x)<=w   i=m +1,m +2,...,m ,
             constraints:    j   i      j     6    6        7
                                           j=1,2,...,m -m
                                                      7  6

          where each c  is a nonlinear function; v  and w  are constant
                      i                           j      j
          scalars; and any category may be empty. Note that we do not
          include a separate category for constraints of the form c (x)<=0,
                                                                   i
          since this is equivalent to -c (x)>=0.
                                        i

          2.2. Geometric Representation and Terminology

          To illustrate the nature of optimization problems it is useful to
          consider the following example in two dimensions

                                 x
                                  1   2   2
                           F(x)=e  (4x +2x +4x x +2x +1).
                                      1   2   1 2   2

          (This function is used as the example function in the
          documentation for the unconstrained routines.)


                                     Figure 1
                   Please see figure in printed Reference Manual

          Figure 1 is a contour diagram of F(x). The contours labelled
          F ,F ,...,F  are isovalue contours, or lines along which the
           0  1      4
                                                                   *
          function F(x) takes specific constant values. The point x  is a
                                                                *
          local unconstrained minimum, that is, the value of F(x ) is less
          than its value at neighbouring points. A function may have several
          such minima. The lowest of the local minima is termed a global
                                                            *
          minimum. In the problem illustrated in Figure 1, x  is the only

                                   _
          local minimum. The point x is said to be a saddle point because
          it is a minimum along the line AB, but a maximum along CD.

          If we add the constraint x >=0 to the problem of minimizing F(x),
                                    1
          the solution remains unaltered. In Figure 1 this constraint is
          represented by the straight line passing through x =0, and the
                                                            1
          shading on the line indicates the unacceptable region. The region
              n
          in R  satisfying the constraints of an optimization problem is
          termed the feasible region. A point satisfying the constraints is
          defined as a feasible point.

          If we add the nonlinear constraint x +x -x x -1.5>=0, represented
                                              1  2  1 2
                                                       *
          by the curved shaded line in Figure 1, then x  is not a feasible
                                                                ^
          point. The solution of the new constrained problem is x, the
          feasible point with the smallest function value.

          2.2.1.  Gradient vector

          The vector of first partial derivatives of F(x) is called the
          gradient vector, and is denoted by g(x), i.e.,

                             [ ddF(x)  ddF(x)      ddF(x)]T
                        g(x)=[ ------, ------,..., ------] .
                             [  ddx     ddx         ddx  ]
                             [     1       2           n ]

          For the function illustrated in Figure 1,

                                   [      x          ]
                                   [       1         ]
                                   [F(x)+e  (8x +4x )]
                                   [           1   2 ]
                                   [ x               ]
                                   [  1              ]
                              g(x)=[e  (4x +4x +2)   ].
                                   [      2   1      ]

          The gradient vector is of importance in optimization because it
          must be zero at an unconstrained minimum of any function with
          continuous first derivatives.
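
          For illustration, the sketch below (not a Library routine)
          evaluates F(x) and g(x) for the example function above; the
          objective routine supplied by the user to a routine such as
          E04DGF performs essentially this calculation for the user's own
          function.

*     Illustrative sketch: F(x) and g(x) for the example function
*     F(x) = exp(x1)*(4*x1**2 + 2*x2**2 + 4*x1*x2 + 2*x2 + 1).
      SUBROUTINE FANDG(X, F, G)
      IMPLICIT NONE
      DOUBLE PRECISION X(2), F, G(2), E
      E    = EXP(X(1))
      F    = E*(4.0D0*X(1)**2 + 2.0D0*X(2)**2 +
     +          4.0D0*X(1)*X(2) + 2.0D0*X(2) + 1.0D0)
*     first partial derivatives, as given above
      G(1) = F + E*(8.0D0*X(1) + 4.0D0*X(2))
      G(2) = E*(4.0D0*X(2) + 4.0D0*X(1) + 2.0D0)
      RETURN
      END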

          2.2.2.  Hessian matrix

          The matrix of second partial derivatives of a function is termed
          its Hessian matrix. The Hessian matrix of F(x) is denoted by G(x)
                                                  2
          and its (i,j)th element is given by dd F(x)/ddx ddx . If F(x)
                                                           i   j
          has continuous second derivatives, then G(x) must be positive
          semi-definite at any unconstrained minimum of F.

          2.2.3.  Jacobian matrix; matrix of constraint normals

          In nonlinear least-squares problems, the matrix of first partial
          derivatives of the vector-valued function f(x) is termed the
          Jacobian matrix of f(x) and its (i,j)th component is ddf /ddx .
                                                                  i    j

          The vector of first partial derivatives of the constraint c (x)
                                                                     i
          is denoted by

                                 [ ddc (x)      ddc (x)]T
                                 [    i            i   ]
                           a (x)=[ -------,..., -------] .
                            i    [  ddx          ddx   ]
                                 [     1            n  ]

                      ^                ^
          At a point, x, the vector a (x) is orthogonal (normal) to the
                                     i
                                                    ^
          isovalue contour of c (x) passing through x; this relationship is
                               i
          illustrated for a two-dimensional function in Figure 2.


                                     Figure 2
                   Please see figure in printed Reference Manual

          The matrix whose columns are the vectors {a } is termed the
                                                     i
          matrix of constraint normals. Note that if c (x) is a linear
                                                      i
                                T
          constraint involving a x, then its vector of first partial
                                i
          derivatives is simply the vector a .
                                            i

          2.3. Sufficient Conditions for a Solution

          All nonlinear functions will be assumed to have continuous second
          derivatives in the neighbourhood of the solution.

          2.3.1.  Unconstrained minimization

                                                                 *
          The following conditions are sufficient for the point x  to be an
          unconstrained local minimum of F(x):

                      *
          (i)   |||g(x )|||=0; and

                   *
          (ii)  G(x ) is positive-definite,

          where |||g||| denotes the Euclidean length of g.
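
          For a problem of modest size these conditions can be checked
          directly. The sketch below (illustrative only; the names are
          not those of any Library routine) tests condition (i) against a
          small tolerance, and tests condition (ii) by attempting a
          Cholesky factorization of the Hessian, which succeeds only if
          the matrix is positive-definite.

*     Illustrative sketch: test the sufficient conditions at a point,
*     given the gradient G and the Hessian H there (the lower triangle
*     of H is overwritten by its Cholesky factor if one exists).
      SUBROUTINE CHKMIN(N, G, H, LDH, TOL, OK)
      IMPLICIT NONE
      INTEGER          N, LDH, I, J, K
      DOUBLE PRECISION G(N), H(LDH,N), TOL, GNORM, S
      LOGICAL          OK
*     (i)  ||g(x)|| must be (near) zero
      GNORM = 0.0D0
      DO 10 I = 1, N
         GNORM = GNORM + G(I)**2
   10 CONTINUE
      GNORM = SQRT(GNORM)
      OK = GNORM .LE. TOL
*     (ii) H must be positive-definite: attempt a Cholesky
*     factorization and fail if any pivot is not positive
      DO 40 J = 1, N
         S = H(J,J)
         DO 20 K = 1, J - 1
            S = S - H(J,K)**2
   20    CONTINUE
         IF (S .LE. 0.0D0) THEN
            OK = .FALSE.
            RETURN
         END IF
         H(J,J) = SQRT(S)
         DO 30 I = J + 1, N
            S = H(I,J)
            DO 25 K = 1, J - 1
               S = S - H(I,K)*H(J,K)
   25       CONTINUE
            H(I,J) = S/H(J,J)
   30    CONTINUE
   40 CONTINUE
      RETURN
      END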

          2.3.2.  Minimization subject to bounds on the variables

          At the solution of a bounds-constrained problem, variables which
          are not on their bounds are termed free variables. If it is known
          in advance which variables are on their bounds at the solution,
          the problem can be solved as an unconstrained problem in just the
          free variables; thus, the sufficient conditions for a solution
          are similar to those for the unconstrained case, applied only to
          the free variables.

                                                      *
          Sufficient conditions for a feasible point x  to be the solution
          of a bound-constrained problem are as follows:

                      *
          (i)   |||g(x )|||=0; and

                   *
          (ii)  G(x ) is positive-definite; and

                    *                   *
          (iii) g (x )<0 if x =u ;  g (x )>0 if x =l ,
                 j           j  j    j           j  j

                _
          where g(x) is the gradient of F(x) with respect to the free
                         _
          variables, and G(x) is the Hessian matrix of F(x) with respect to
          the free variables. The extra condition (iii) ensures that F(x)
          cannot be reduced by moving off one or more of the bounds.

          2.3.3.  Linearly-constrained minimization

          For the sake of simplicity, the following description does not
          include a specific treatment of bounds or range constraints,
          since the results for general linear inequality constraints can
          be applied directly to these cases.

                         *
          At a solution x  of a linearly-constrained problem, the
          constraints which hold as equalities are called the active or
          binding constraints. Assume that there are t active constraints
                           *          ^
          at the solution x , and let A denote the matrix whose columns are
                                                                         ^
          the columns of A corresponding to the active constraints, with b
          the vector similarly obtained from b; then

                                       ^T * ^
                                       A x =b.

          The matrix Z is defined as an n by (n-t) matrix satisfying:

                                   ^T        T
                                   A Z=0;   Z Z=I.

          The columns of Z form an orthogonal basis for the set of vectors
                                       ^
          orthogonal to the columns of A.

          Define

                      T
               g (x)=Z g(x), the projected gradient vector of F(x);
                z

                      T
               G (x)=Z G(x)Z, the projected Hessian matrix of F(x).
                z

          At the solution of a linearly-constrained problem, the projected
          gradient vector must be zero, which implies that the gradient
                    *
          vector g(x ) can be written as a linear combination of the
                                     t
                     ^           *   --          ^  ^
          columns of A, i.e., g(x )= >  (lambda) a =A(lambda). The scalar
                                     --         i i
                                     i=1
          (lambda)  is defined as the Lagrange multiplier corresponding to
                  i
          the ith active constraint. A simple interpretation of the ith
          Lagrange multiplier is that it gives the gradient of F(x) along
          the ith active constraint normal; a convenient definition of the
          Lagrange multiplier vector (although not a recommended method for
          computation) is:


                                        ^T^ -1^T   *
                              (lambda)=(A A)  A g(x ).

                                     *
          Sufficient conditions for x  to be the solution of a linearly-
          constrained problem are:

                 *                  ^T * ^
          (i)   x  is feasible, and A x =b; and

                       *                            *  ^
          (ii)  |||g (x )|||=0, or equivalently, g(x )=A(lambda); and
                    z

                    *
          (iii) G (x ) is positive-definite; and
                 z

          (iv)  (lambda) >0 if (lambda)  corresponds to a constraint
                        i              i
                ^T *  ^
                a x >=b ;
                 i     i

               (lambda) <0 if (lambda)  corresponds to a constraint
                       i              i
               ^T *  ^
               a x <=b .
                i     i

               The sign of (lambda)  is immaterial for equality
                                   i
               constraints, which by definition are always active.

          2.3.4.  Nonlinearly-constrained minimization

          For nonlinearly-constrained problems, much of the terminology is
          defined exactly as in the linearly-constrained case. The set of
          active constraints at x again means the set of constraints that
                                                                     ^
          hold as equalities at x, with corresponding definitions of c and
          ^             ^
          A: the vector c(x) contains the active constraint functions, and
                         ^
          the columns of A(x) are the gradient vectors of the active
                                                           ^
          constraints. As before, Z is defined in terms of A(x) as a matrix
          such that:


                                   ^T        T
                                   A Z=0;   Z Z=I

          where the dependence on x has been suppressed for compactness.

                                                             T
          The projected gradient vector g (x) is the vector Z g(x). At the
                                         z
                    *
          solution x  of a nonlinearly-constrained problem, the projected
          gradient must be zero, which implies the existence of Lagrange
          multipliers corresponding to the active constraints, i.e.,
             *  ^  *
          g(x )=A(x )(lambda).

          The Lagrangian function is given by:


                                                     T^
                          L(x,(lambda))=F(x)-(lambda) c(x).

          We define g (x) as the gradient of the Lagrangian function; G (x)
                     L                                                 L
                                     ^
          as its Hessian matrix, and G (x) as its projected Hessian matrix,
                                      L
                ^   T
          i.e., G =Z G Z.
                 L    L
                                     *
          Sufficient conditions for x  to be a solution of a nonlinearly-
          constrained problem are:

                 *                  ^  *
          (i)   x  is feasible, and c(x )=0; and

                       *                             *  ^  *
          (ii)  |||g (x )|||=0, or, equivalently, g(x )=A(x )(lambda); and
                    z

                ^   *
          (iii) G (x ) is positive-definite; and
                 L

          (iv)  (lambda) >0 if (lambda)  corresponds to a constraint of the
                        i              i
                     ^
                form c >=0; the sign of (lambda)  is immaterial for an
                      i                         i
                equality constraint.

          Note that condition (ii) implies that the projected gradient of
                                                        *
          the Lagrangian function must also be zero at x , since the
                          T                        ^  *
          application of Z  annihilates the matrix A(x ).

          2.4. Background to Optimization Methods

          All the algorithms contained in this Chapter generate an
                               (k)                                  *
          iterative sequence {x   } that converges to the solution x  in
          the limit, except for some special problem categories (i.e.,
          linear and quadratic programming). To terminate computation of
          the sequence, a convergence test is performed to determine
          whether the current estimate of the solution is an adequate
          approximation. The convergence tests are discussed in Section 2.6.

                                                     (k)
          Most of the methods construct a sequence {x   } satisfying:

                              (k+1)  (k)        (k) (k)
                             x     =x   +(alpha)   p   ,

                            (k)
          where the vector p    is termed the direction of search, and
                 (k)                                          (k)
          (alpha)    is the steplength. The steplength (alpha)    is chosen
                     (k+1)     (k)
          so that F(x     )<F(x   ).
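
          The sketch below (illustrative only, and not the method used by
          any Library routine) shows this framework with the simple
          steepest-descent choice p = -g(x) and a step-halving choice of
          (alpha), applied to the example function of Section 2.2;
          practical routines use more sophisticated search directions and
          step-length procedures.

*     Illustrative sketch of the basic iteration x := x + alpha*p,
*     using steepest descent (p = -g) and step-halving on the example
*     function of Section 2.2.  Not the method used by any E04 routine.
      PROGRAM DESCNT
      IMPLICIT NONE
      INTEGER          K, MAXIT
      PARAMETER        (MAXIT=100)
      DOUBLE PRECISION X(2), G(2), P(2), XNEW(2), GNEW(2)
      DOUBLE PRECISION F, FNEW, ALPHA
      X(1) = -1.0D0
      X(2) =  1.0D0
      CALL FG(X, F, G)
      DO 30 K = 1, MAXIT
*        search direction: steepest descent
         P(1) = -G(1)
         P(2) = -G(2)
*        halve the step until the function value decreases
         ALPHA = 1.0D0
   10    CONTINUE
         XNEW(1) = X(1) + ALPHA*P(1)
         XNEW(2) = X(2) + ALPHA*P(2)
         CALL FG(XNEW, FNEW, GNEW)
         IF (FNEW .GE. F .AND. ALPHA .GT. 1.0D-10) THEN
            ALPHA = 0.5D0*ALPHA
            GO TO 10
         END IF
         IF (FNEW .GE. F) GO TO 40
         X(1) = XNEW(1)
         X(2) = XNEW(2)
         F    = FNEW
         G(1) = GNEW(1)
         G(2) = GNEW(2)
         IF (SQRT(G(1)**2 + G(2)**2) .LT. 1.0D-6) GO TO 40
   30 CONTINUE
   40 WRITE (*,*) 'Estimate of minimum:', X(1), X(2), '  F =', F
      END

      SUBROUTINE FG(X, F, G)
*     F(x) and g(x) for the example function of Section 2.2
      IMPLICIT NONE
      DOUBLE PRECISION X(2), F, G(2), E
      E    = EXP(X(1))
      F    = E*(4.0D0*X(1)**2 + 2.0D0*X(2)**2 +
     +          4.0D0*X(1)*X(2) + 2.0D0*X(2) + 1.0D0)
      G(1) = F + E*(8.0D0*X(1) + 4.0D0*X(2))
      G(2) = E*(4.0D0*X(2) + 4.0D0*X(1) + 2.0D0)
      RETURN
      END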


          2.4.1.  Methods for unconstrained optimization

          The distinctions among methods arise primarily from the need to
          use varying levels of information about derivatives of F(x) in
          defining the search direction. We describe three basic approaches
          to unconstrained problems, which may be extended to other problem
          categories. Since a full description of the methods would fill
          several volumes, the discussion here can do little more than
          allude to the processes involved, and direct the reader to other
          sources for a full explanation.

          (a)   Newton-type Methods (Modified Newton Methods)

                                                              (k)
                Newton-type methods use the Hessian matrix G(x   ), or a
                                                      (k)
                finite difference approximation to G(x   ), to define the
                search direction. The routines in the Library either
                                                                      (k)
                require a subroutine that computes the elements of G(x   ),
                                       (k)
                or they approximate G(x   ) by finite differences.

                Newton-type methods are the most powerful methods available
                for general problems and will find the minimum of a
                quadratic function in one iteration. See Sections 4.4 and
                4.5.1 of Gill et al [5].

          (b)   Quasi-Newton Methods

                                                                (k)
                Quasi-Newton methods approximate the Hessian G(x   ) by a
                        (k)
                matrix B    which is modified at each iteration to include
                information obtained about the curvature of F along the
                latest search direction. Although not as robust as Newton-
                type methods, quasi-Newton methods can be more efficient
                           (k)
                because G(x   ) is neither computed nor approximated by
                finite differences. Quasi-Newton methods minimize a
                quadratic function in n iterations. See Section 4.5.2 of
                Gill et al [5].

          (c)   Conjugate-Gradient Methods

                Unlike Newton-type and quasi-Newton methods, conjugate
                gradient methods do not require the storage of an n by n
                matrix and so are ideally suited to solve large problems.
                Conjugate-gradient type methods are not usually as reliable
                or efficient as Newton-type, or quasi-Newton methods. See
                Section 4.8.3 of Gill et al [5].


          2.4.2.  Methods for nonlinear least-squares problems

          These methods are similar to those for unconstrained
          optimization, but exploit the special structure of the Hessian
          matrix to give improved computational efficiency.

          Since

                                         m
                                         --  2
                                   F(x)= >  f (x)
                                         --  i
                                         i=1

          the Hessian matrix G(x) is of the form

                                            m
                                     T      --
                          G(x)=2[J(x) J(x)+ >  f (x)G (x)],
                                            --  i    i
                                            i=1

          where J(x) is the Jacobian matrix of f(x), and G (x) is the
                                                          i
          Hessian matrix of f (x).
                             i

          In the neighbourhood of the solution, |||f(x)||| is often small
                             T
          compared to |||J(x) J(x)||| (for example, when f(x) represents
          the goodness of fit of a nonlinear model to observed data). In
                           T
          such cases, 2J(x) J(x) may be an adequate approximation to G(x),
          thereby avoiding the need to compute or approximate second
          derivatives of {f (x)}. See Section 4.7 of Gill et al [5].
                           i
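
          For illustration, the sketch below (the routine and its argument
          names are invented) forms the approximation described above from
          a given m by n Jacobian matrix.

*     Illustrative sketch: form the Gauss-Newton approximation
*     G = 2*(J transposed)*J to the Hessian of a sum of squares,
*     given the m by n Jacobian FJ of the residual vector f.
      SUBROUTINE GNHESS(M, N, FJ, LDFJ, G, LDG)
      IMPLICIT NONE
      INTEGER          M, N, LDFJ, LDG, I, J, K
      DOUBLE PRECISION FJ(LDFJ,N), G(LDG,N), S
      DO 30 J = 1, N
         DO 20 I = 1, N
            S = 0.0D0
            DO 10 K = 1, M
               S = S + FJ(K,I)*FJ(K,J)
   10       CONTINUE
            G(I,J) = 2.0D0*S
   20    CONTINUE
   30 CONTINUE
      RETURN
      END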

          2.4.3.  Methods for handling constraints

          Bounds on the variables are dealt with by fixing some of the
          variables on their bounds and adjusting the remaining free
          variables to minimize the function. By examining estimates of the
          Lagrange multipliers it is possible to adjust the set of
          variables fixed on their bounds so that eventually the bounds
          active at the solution are correctly identified. This type
          of method is called an active set method. One feature of such
          methods is that, given an initial feasible point, all
                          (k)
          approximations x    are feasible. This approach can be extended
          to general linear constraints. The set of constraints which
          hold as equalities at a point x, and which is used to predict,
          or approximate, the set of active constraints, is called the
          working set.

          Nonlinear constraints are more difficult to handle. If at all
          possible, it is usually beneficial to avoid including nonlinear
          constraints during the formulation of the problem. The methods
          currently implemented in the Library handle nonlinearly
          constrained problems either by transforming them into a sequence
          of bound constraint problems, or by transforming them into a
          sequence of quadratic programming problems. A feature of almost
                                                         (k)
          all methods for nonlinear constraints is that x    is not
          guaranteed to be feasible except in the limit, and this is
          certainly true of the routines currently in the Library. See
          Chapter 6, particularly Section 6.4 and Section 6.5 of Gill et al
          [5].

          Anyone interested in a detailed description of methods for
          optimization should consult the references.

          2.5. Scaling

          Scaling (in a broadly defined sense) often has a significant
          influence on the performance of optimization methods. Since
          convergence tolerances and other criteria are necessarily based
          on an implicit definition of 'small' and 'large', problems with
          unusual or unbalanced scaling may cause difficulties for some
          algorithms. Nonetheless, there are currently no scaling routines
          in the Library, although the position is under constant review.
          In light of the present state of the art, it is considered that
          sensible scaling by the user is likely to be more effective than
          any automatic routine. The following sections present some
          general comments on problem scaling.

          2.5.1.  Transformation of variables

          One method of scaling is to transform the variables from their
          original representation, which may reflect the physical nature of
          the problem, to variables that have certain desirable properties
          in terms of optimization. It is generally helpful for the
          following conditions to be satisfied:

          (i)   the variables are all of similar magnitude in the region of
                interest;

          (ii)  a fixed change in any of the variables results in similar
                changes in F(x). Ideally, a unit change in any variable
                produces a unit change in F(x);

          (iii) the variables are transformed so as to avoid cancellation
                error in the evaluation of F(x).

          Normally, users should restrict themselves to linear
          transformations of variables, although occasionally nonlinear
          transformations are possible. The most common such transformation
          (and often the most appropriate) is of the form

                                     x   =Dx   ,
                                      new   old

          where D is a diagonal matrix with constant coefficients. Our
          experience suggests that more use should be made of the
          transformation

                                    x   =Dx   +v,
                                     new   old

          where v is a constant vector.

          Consider, for example, a problem in which the variable x
                                                                  3
          represents the position of the peak of a Gaussian curve to be
          fitted to data for which the extreme values are 150 and 170;
          therefore x  is known to lie in the range 150--170. One possible
                     3                              _
          scaling would be to define a new variable x , given by
                                                     3


                                           x
                                      _     3
                                      x = ---.
                                       3  170

                                                                 _
          A better transformation, however, is given by defining x  as
                                                                  3

                                         x -160
                                     _    3
                                     x = ------.
                                      3    10

          Frequently, an improvement in the accuracy of evaluation of F(x)
          can result if the variables are scaled before the routines to
          evaluate F(x) are coded. For instance, in the above problem just
          mentioned of Gaussian curve fitting, x  may always occur in terms
                                                3
          of the form (x -x ), where x  is a constant representing the mean
                        3  m          m
          peak position.
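
          In code, such a transformation and its inverse are only a few
          statements; the sketch below (illustrative only) applies the
          second, preferred scaling of the peak-position variable
          described above.

*     Illustrative sketch: scale and unscale the peak-position
*     variable of the example above, scaled x3 = (x3 - 160)/10.
      PROGRAM SCALEX
      IMPLICIT NONE
      DOUBLE PRECISION X3, XBAR3
      X3 = 163.7D0
*     transform to the scaled variable seen by the optimization routine
      XBAR3 = (X3 - 160.0D0)/10.0D0
*     recover the physical variable from the scaled one
      X3 = 10.0D0*XBAR3 + 160.0D0
      WRITE (*,*) 'Scaled value =', XBAR3, '  original =', X3
      END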

          2.5.2.  Scaling the objective function

          The objective function has already been mentioned in the
          discussion of scaling the variables. The solution of a given
          problem is unaltered if F(x) is multiplied by a positive
          constant, or if a constant value is added to F(x). It is
          generally preferable for the objective function to be of the
          order of unity in the region of interest; thus, if in the
                                                                +5
          original formulation F(x) is always of the order of 10   (say),
                                                           -5
          then the value of F(x) should be multiplied by 10   when
          evaluating the function within the optimization routines. If a
          constant is added or subtracted in the computation of F(x),
          usually it should be omitted - i.e., it is better to formulate
                   2  2                 2  2               2  2
          F(x) as x +x  rather than as x +x +1000 or even x +x +1. The
                   1  2                 1  2               1  2
          inclusion of such a constant in the calculation of F(x) can
          result in a loss of significant figures.

          2.5.3.  Scaling the constraints

          The solution of a nonlinearly-constrained problem is unaltered if
          the ith constraint is multiplied by a positive weight w . At the
                                                                 i
          approximation of the solution determined by a Library routine,
          the active constraints will not be satisfied exactly, but will
                                                 -8       -6
          have 'small' values (for example, c =10  , c =10  , etc.). In
                                             1        2
          general, this discrepancy will be minimized if the constraints
          are weighted so that a unit change in x produces a similar change
          in each constraint.

          A second reason for introducing weights is related to the effect
          of the size of the constraints on the Lagrange multiplier
          estimates and, consequently, on the active set strategy.
          Additional discussion is given in Gill et al [5].

          2.6. Analysis of Computed Results

          2.6.1.  Convergence criteria

          The convergence criteria inevitably vary from routine to routine,
          since in some cases more information is available to be checked
          (for example, is the Hessian matrix positive-definite?), and
          different checks need to be made for different problem categories
          (for example, in constrained minimization it is necessary to
          verify whether a trial solution is feasible). Nonetheless, the
          underlying principles of the various criteria are the same; in
          non-mathematical terms, they are:

                                  (k)
          (i)   is the sequence {x   } converging?

                                  (k)
          (ii)  is the sequence {F   } converging?

          (iii) are the necessary and sufficient conditions for the
                solution satisfied?

          The decision as to whether a sequence is converging is
          necessarily speculative. The criterion used in the present
          routines is to assume convergence if the relative change
          occurring between two successive iterations is less than some
          prescribed quantity. Criterion (iii) is the most reliable but
          often the conditions cannot be checked fully because not all the
          required information may be available.
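
          A relative-change test of this kind might be coded as in the
          sketch below (illustrative only; each routine documents its own
          precise criteria and tolerances).

*     Illustrative sketch: assume convergence when the relative
*     changes in both x and F between two successive iterations are
*     smaller than a prescribed tolerance TOL.
      LOGICAL FUNCTION CONVGD(N, XOLD, XNEW, FOLD, FNEW, TOL)
      IMPLICIT NONE
      INTEGER          N, I
      DOUBLE PRECISION XOLD(N), XNEW(N), FOLD, FNEW, TOL, DX, XN
      DX = 0.0D0
      XN = 0.0D0
      DO 10 I = 1, N
         DX = DX + (XNEW(I) - XOLD(I))**2
         XN = XN + XNEW(I)**2
   10 CONTINUE
      CONVGD = SQRT(DX) .LE. TOL*(1.0D0 + SQRT(XN)) .AND.
     +         ABS(FNEW - FOLD) .LE. TOL*(1.0D0 + ABS(FNEW))
      RETURN
      END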

          2.6.2.  Checking results

          Little a priori guidance can be given as to the quality of the
          solution found by a nonlinear optimization algorithm, since no
          guarantees can be given that the methods will always work.
          Therefore, it is necessary for the user to check the computed
          solution even if the routine reports success. Frequently a
          'solution' may have been found even when the routine does not
          report a success. The reason for this apparent contradiction is
          that the routine needs to assess the accuracy of the solution.
          This assessment is not an exact process and consequently may be
          unduly pessimistic. Any 'solution' is in general only an
          approximation to the exact solution, and it is possible that the
          accuracy specified by the user is too stringent.

          Further confirmation can be sought by trying to check whether or
          not convergence tests are almost satisfied, or whether or not
          some of the sufficient conditions are nearly satisfied. When it
          is thought that a routine has returned a non-zero value of IFAIL
          only because the requirements for 'success' were too stringent, it
          may be worth restarting with increased convergence tolerances.

          For nonlinearly-constrained problems, check whether the solution
          returned is feasible, or nearly feasible; if not, the solution
          returned is not an adequate solution.

          Confidence in a solution may be increased by resolving the
          problem with a different initial approximation to the solution.
          See Section 8.3 of Gill et al [5] for further information.

          2.6.3.  Monitoring progress

          Many of the routines in the Chapter have facilities to allow the
          user to monitor the progress of the minimization process, and
          users are encouraged to make use of these facilities. Monitoring
          information can be a great aid in assessing whether or not a
          satisfactory solution has been obtained, and in indicating
          difficulties in the minimization problem or in the routine's
          ability to cope with the problem.

          The behaviour of the function, the estimated solution and first
          derivatives can help in deciding whether a solution is acceptable
          and what to do in the event of a return with a non-zero value of
          IFAIL.

          2.6.4.  Confidence intervals for least-squares solutions

          When estimates of the parameters in a nonlinear least-squares
          problem have been found, it may be necessary to estimate the
          variances of the parameters and the fitted function. These can be
          calculated from the Hessian of F(x) at the solution.

          In many least-squares problems, the Hessian is adequately
                                              T
          approximated at the solution by G=2J J (see Section 2.4.3). The
          Jacobian, J, or a factorization of J is returned by all the
          comprehensive least-squares routines and, in addition, a routine
          is supplied in the Library to estimate variances of the
          parameters following the use of most of the nonlinear least-
                                                 T
          squares routines, in the case that G=2J J is an adequate
          approximation.

          Let H be the inverse of G, and S be the sum of squares, both
          calculated at the solution x; an unbiased estimate of the
          variance of the ith parameter x  is
                                         i


                                           2S
                                   var x = ---H
                                        i  m-n ii

                                                               

          and an unbiased estimate of the covariance of x  and x  is
                                                         i      j

                                              2S
                                covar(x ,x )= ---H  .
                                       i  j   m-n ij

              *
          If x  is the true solution, then the 100(1-(beta)) confidence
          interval on x is

                 ______
                /                            *
          x -  / var x .t                 < x  < x
           i \/       i  (1-(beta)/2,m-n)    i    i

               ______
              /
          +  / var x .t                ,   i=1,2,...,n
           \/       i  (1-(beta)/2,m-n)

          where t                 is the 100(1-(beta)/2) percentage point
                 (1-(beta)/2,m-n)
          of the t-distribution with m-n degrees of freedom.
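
          Given H, S, m, n and the required percentage point of the
          t-distribution (which must be obtained from tables or from a
          statistical routine, and is supplied as an argument below),
          these estimates are simple to compute. The following sketch is
          illustrative only and its names are not those of any Library
          routine.

*     Illustrative sketch: unbiased variance estimate and confidence
*     limits for the I-th parameter, following the formulae above.
*     TVAL must hold the relevant percentage point of the
*     t-distribution with M-N degrees of freedom.
      SUBROUTINE PARCI(I, N, M, X, H, LDH, S, TVAL, VAR, XLO, XHI)
      IMPLICIT NONE
      INTEGER          I, N, M, LDH
      DOUBLE PRECISION X(N), H(LDH,N), S, TVAL, VAR, XLO, XHI, HW
      VAR = 2.0D0*S/DBLE(M-N)*H(I,I)
      HW  = SQRT(VAR)*TVAL
      XLO = X(I) - HW
      XHI = X(I) + HW
      RETURN
      END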

          In the majority of problems, the residuals f , for i=1,2,...,m,
                                                      i
          contain the difference between the values of a model function
          (phi)(z,x) calculated for m different values of the independent
          variable z, and the corresponding observed values at these
          points. The minimization process determines the parameters, or
          constants x, of the fitted function (phi)(z,x). For any given
          value z of the independent variable, an unbiased estimate of
          the variance of (phi) at that point is


                                 n   n
                             2S  --  -- [ dd(phi)] [ dd(phi)]
                  var (phi)= --- >   >  [ -------] [ -------] H  .
                             m-n --  -- [  ddx   ] [  ddx   ]  ij
                                 i=1 j=1[     i  ]z[     j  ]z

                                                                  

          The 100(1-(beta)) confidence interval on (phi) at the point z is


                                                          *
          (phi)(z,x)-\/var (phi).t               < (phi)(z,x )
                                  ((beta)/2,m-n)
                                             

                             < (phi)(z,x) +\/var (phi).t              .
                                                       ((beta)/2,m-n)

          For further details on the analysis of least-squares solutions
          see Bard [1] and Wolberg [7].

          2.7. References

          [1]   Bard Y (1974) Nonlinear Parameter Estimation. Academic
                Press.

          [2]   Dantzig G B (1963) Linear Programming and Extensions.
                Princeton University Press.

          [3]   Fletcher R (1987) Practical Methods of Optimization. Wiley
                (2nd Edition).

          [4]   Gill P E and Murray W (eds) (1974) Numerical Methods for
                Constrained Optimization. Academic Press.

          [5]   Gill P E, Murray W and Wright M H (1981) Practical
                Optimization. Academic Press.

          [6]   Murray W (ed) (1972) Numerical Methods for Unconstrained
                Optimization. Academic Press.

          [7]   Wolberg J R (1967) Prediction Analysis. Van Nostrand.

          3. Recommendations on Choice and Use of Routines

          The choice of routine depends on several factors: the type of
          problem (unconstrained, etc.); the level of derivative
          information available (function values only, etc.); the
          experience of the user (there are easy-to-use versions of some
          routines); whether or not storage is a problem; and whether
          computational time has a high priority.

          3.1. Choice of Routine

          Routines are provided to solve the following types of problem:


          Nonlinear Programming                                      E04UCF
          Quadratic Programming                                      E04NAF
          Linear Programming                                         E04MBF
          Nonlinear Function                                         E04DGF
          (using 1st derivatives)
          Nonlinear Function, unconstrained or simple bounds         E04JAF
          (using function values only)
          Nonlinear least-squares                                    E04FDF
          (using function values only)
          Nonlinear least-squares                                    E04GCF
          (using function values and 1st derivatives)

          E04UCF can be used to solve unconstrained, bound-constrained and
          linearly-constrained problems.

          E04NAF can be used as a comprehensive linear programming solver;
          however, in most cases the easy-to-use routine E04MBF will be
          adequate.

          E04MBF can be used to obtain a feasible point for a set of linear
          constraints.

          E04DGF can be used to solve large scale unconstrained problems.

          The routines can be used to solve problems in a single variable.

          3.2. Service Routines

          One of the most common errors in use of optimization routines is
          that the user's subroutines incorrectly evaluate the relevant
          partial derivatives. Because exact gradient information normally
          enhances efficiency in all areas of optimization, users are
          encouraged to provide analytical derivatives whenever
          possible. However, mistakes in the computation of derivatives can
          result in serious and obscure run-time errors, as well as
          complaints that the Library routines are incorrect.

          E04UCF incorporates a check on the gradients being supplied and
          users are encouraged to utilize this option; E04GCF also
          incorporates a call to a derivative checker.

          E04YCF estimates selected elements of the variance-covariance
          matrix for the computed regression parameters following the use
          of a nonlinear least-squares routine.

          3.3. Function Evaluations at Infeasible Points

          Users must not assume that the routines for constrained problems
          will require the objective function to be evaluated only at
          points which satisfy the constraints, i.e., feasible points. In
          the first place, some of the easy-to-use routines call a service
          routine which will evaluate the objective function at the user-
          supplied initial point, and at neighbouring points (to check
          user-supplied derivatives or to estimate intervals for finite
          differencing). Apart from this, all routines will ensure that any
          evaluations of the objective function occur at points which
          approximately satisfy any simple bounds or linear constraints.
          Satisfaction of such constraints is only approximate because:

          (a)   routines which have a parameter FEATOL may allow such
                constraints to be violated by a margin specified by FEATOL;

          (b)   routines which estimate derivatives by finite differences
                may require function evaluations at points which just
                violate such constraints even though the current iteration
                just satisfies them.

          There is no attempt to ensure that the current iterate
          satisfies any nonlinear constraints. Users who wish to prevent
          their objective function being evaluated outside some known
          region (where it may be undefined or not practically computable),
          may try to confine the iteration within this region by imposing
          suitable simple bounds or linear constraints (but beware as this
          may create new local minima where these constraints are active).

          Note also that some routines allow the user-supplied routine to
          return a parameter (MODE) with a negative value to force an
          immediate clean exit from the minimization when the objective
          function cannot be evaluated.
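
          For example, with E04DGF the user-supplied objective routine
          OBJFUN (see the specification of OBJFUN in the E04DGF routine
          document later in this chapter) might be coded along the
          following lines; the particular objective function used here is
          invented purely for illustration.

*     Illustrative sketch: a user routine for E04DGF that forces a
*     clean exit (MODE set negative) when the invented objective
*     F(x) = log(x1) + x2**2 cannot be evaluated.  Assumes N = 2.
      SUBROUTINE OBJFUN(MODE, N, X, OBJF, OBJGRD, NSTATE, IUSER, USER)
      IMPLICIT NONE
      INTEGER          MODE, N, NSTATE, IUSER(*)
      DOUBLE PRECISION X(N), OBJF, OBJGRD(N), USER(*)
      IF (X(1) .LE. 0.0D0) THEN
*        objective undefined here: request an immediate exit
         MODE = -1
         RETURN
      END IF
      OBJF      = LOG(X(1)) + X(2)**2
      OBJGRD(1) = 1.0D0/X(1)
      OBJGRD(2) = 2.0D0*X(2)
      RETURN
      END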

          3.4. Related Problems

          Apart from the standard types of optimization problem, there are
          other related problems which can be solved by routines in this or
          other chapters of the Library.

          E04MBF can be used to find a feasible point for a set of linear
          constraints and simple bounds.

          Two routines in Chapter F04 solve linear least-squares problems,
                          m                         n
                          --      2                 --
          i.e., minimize  >  r (x)  where r (x)=b - >  a  x .
                          --  i            i     i  --  ij j
                          i=1                       j=1

          E02GAF solves an overdetermined system of linear equations in the
                                    m
                                    --
          l  norm, i.e., minimizes  >  |r (x)|, with r  as above.
           1                        --   i            i
                                    i=1


          E04 -- Minimizing or Maximizing a Function        Contents -- E04
          Chapter E04

          Minimizing or Maximizing a Function

          E04DGF  Unconstrained minimum, pre-conditioned conjugate gradient
                  algorithm, function of several variables using 1st
                  derivatives

          E04DJF  Read optional parameter values for E04DGF from external
                  file

          E04DKF  Supply optional parameter values to E04DGF

          E04FDF  Unconstrained minimum of a sum of squares, combined
                  Gauss-Newton and modified Newton algorithm using function
                  values only

          E04GCF  Unconstrained minimum of a sum of squares, combined
                  Gauss-Newton and quasi-Newton algorithm, using 1st
                  derivatives

          E04JAF  Minimum, function of several variables, quasi-Newton
                  algorithm, simple bounds, using function values only

          E04MBF  Linear programming problem

          E04NAF  Quadratic programming problem

          E04UCF  Minimum, function of several variables, sequential QP
                  method, nonlinear constraints, using function values and
                  optionally 1st derivatives

          E04UDF  Read optional parameter values for E04UCF from external
                  file

          E04UEF  Supply optional parameter values to E04UCF

          E04YCF  Covariance matrix for nonlinear least-squares problem

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04dgf}{NAG On-line Documentation: e04dgf}
\beginscroll
\begin{verbatim}

   
   
   
   E04DGF(3NAG)                 E04DGF                  E04DGF(3NAG)
   
   
   
        E04 -- Minimizing or Maximizing a Function                 E04DGF
                E04DGF -- NAG Foundation Library Routine Document
   
        Note: Before using this routine, please read the Users' Note for
        your implementation to check implementation-dependent details.
        The symbol (*) after a NAG routine name denotes a routine that is
        not included in the Foundation Library.
   
        Note for users via the AXIOM system: the interface to this routine
        has been enhanced for use with AXIOM and is slightly different to
        that offered in the standard version of the Foundation Library.  In
        particular, the optional parameters of the NAG routine are now
        included in the parameter list.  These are described in section
        5.1.2, below.
   
        1. Purpose
   
        E04DGF minimizes an unconstrained nonlinear function of several
        variables using a pre-conditioned, limited memory quasi-Newton
        conjugate gradient method. First derivatives are required. The
        routine is intended for use on large scale problems.
   
        2. Specification
   
               SUBROUTINE E04DGF(N,OBJFUN,ITER,OBJF,OBJGRD,X,IWORK,WORK,IUSER,
              1                  USER,ES,FU,IT,LIN,LIST,MA,OP,PR,STA,STO,
              2                  VE,IFAIL)
               INTEGER           N, ITER, IWORK(N+1), IUSER(*),
              1                  IT, PR, STA, STO, VE, IFAIL
               DOUBLE PRECISION OBJF, OBJGRD(N), X(N), WORK(13*N), USER(*),
              1                  ES, FU, LIN, OP, MA
               LOGICAL           LIST
               EXTERNAL          OBJFUN
   
        3. Description
   
        E04DGF uses a pre-conditioned conjugate gradient method and is
        based upon algorithm PLMA as described in Gill and Murray [1] and
        Gill et al [2] Section 4.8.3.
   
        The algorithm proceeds as follows:
   
        Let x  be a given starting point and let k denote the current
             0
        iteration, starting with k=0. The iteration requires g , the
                                                              k
        gradient vector evaluated at x , the kth estimate of the minimum.
                                      k
        At each iteration a vector p  (known as the direction of search)
                                    k
        is computed and the new estimate x    is given by x +(alpha) p
                                          k+1              k        k k
        where (alpha)  (the step length) minimizes the function
                     k
        F(x +(alpha) p ) with respect to the scalar (alpha) . A choice of
           k        k k                                    k
        initial step (alpha)  is taken as
                            0
   
                                                    T
                         (alpha) =min{1,2|F -F   |/g g }
                                0          k  est   k k
   
        where F    is a user-supplied estimate of the function value at
               est
        the solution. If F    is not specified, the software always
                          est
        chooses the unit step length for (alpha) . Subsequent step length
                                                0
        estimates are computed using cubic interpolation with safeguards.
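
         In outline, the initial step could be computed as in the
         following sketch (illustrative only; FEST holds the user's
         estimate of the function value at the solution and GOTEST, an
         assumption of this sketch, records whether such an estimate has
         been supplied).

*     Illustrative sketch: the initial step length defined above,
*     alpha0 = min(1, 2|F_k - F_est|/(g'g)), with a unit step taken
*     when no estimate of the optimal function value is available.
      DOUBLE PRECISION FUNCTION ALPHA0(N, FK, FEST, G, GOTEST)
      IMPLICIT NONE
      INTEGER          N, I
      DOUBLE PRECISION FK, FEST, G(N), GTG
      LOGICAL          GOTEST
      IF (.NOT. GOTEST) THEN
         ALPHA0 = 1.0D0
         RETURN
      END IF
      GTG = 0.0D0
      DO 10 I = 1, N
         GTG = GTG + G(I)**2
   10 CONTINUE
      IF (GTG .GT. 0.0D0) THEN
         ALPHA0 = MIN(1.0D0, 2.0D0*ABS(FK-FEST)/GTG)
      ELSE
         ALPHA0 = 1.0D0
      END IF
      RETURN
      END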
   
        A quasi-Newton method can be used to compute the search direction
        p  by updating the inverse of the approximate Hessian (H ) and
         k                                                      k
        computing
   
                               p   =-H   g                            (1)
                                k+1   k+1 k+1
   
        The updating formula for the approximate inverse is given by
   
                                               (    T    )
                                               (   y H y )
                       1  (     T    T  )   1  (    k k k)   T
             H   =H - ----(H y s +s y H )+ ----(1+ ------)s s         (2)
              k+1  k   T  ( k k k  k k k)   T  (     T   ) k k
                      y s                  y s (    y s  )
                       k k                  k k(     k k )
   
        where y =g   -g  and s =x   -x =(alpha) p .
                k  k+1  k      k  k+1  k        k k
   
        The method used by E04DGF to obtain the search direction is based
        upon computing p    as -H   g    where H    is a matrix obtained
                        k+1      k+1 k+1        k+1
        by updating the identity matrix with a limited number of quasi-
        Newton corrections. The storage of an n by n matrix is avoided by
        storing only the vectors that define the rank two corrections -
        hence the term limited-memory quasi-Newton method. The precise
        method depends upon the number of updating vectors stored. For
        example, the direction obtained with the 'one-step' limited
        memory update is given by (1) using (2) with H  equal to the
                                                      k
        identity matrix, viz.
                                                   T    (    T  )
                                                  s g   (   y y )
                          1  ( T        T      )   k k+1(    k k)
             p   =-g   + ----(s g   y +y g   s )- ------(1+ ----)s
              k+1   k+1   T  ( k k+1 k  k k+1 k)    T   (    T  ) k
                         y s                       y s  (   y s )
                          k k                       k k (    k k)
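
         The 'one-step' direction above can be computed directly from the
         stored vectors, as in the following illustrative sketch (SK, YK
         and GK1 hold the step, the gradient change and the new gradient
         respectively; the names are not those used inside the Library).

*     Illustrative sketch: the 'one-step' limited-memory direction
*     given above, computed from the vectors s (SK), y (YK) and the
*     gradient g at the new point (GK1).
      SUBROUTINE ONESTP(N, SK, YK, GK1, P)
      IMPLICIT NONE
      INTEGER          N, I
      DOUBLE PRECISION SK(N), YK(N), GK1(N), P(N)
      DOUBLE PRECISION YTS, STG, YTG, YTY, C1, C2
      YTS = 0.0D0
      STG = 0.0D0
      YTG = 0.0D0
      YTY = 0.0D0
      DO 10 I = 1, N
         YTS = YTS + YK(I)*SK(I)
         STG = STG + SK(I)*GK1(I)
         YTG = YTG + YK(I)*GK1(I)
         YTY = YTY + YK(I)*YK(I)
   10 CONTINUE
*     p = -g + (s'g y + y'g s)/y's - (s'g/y's)(1 + y'y/y's) s,
*     a descent direction provided y's is positive
      C1 = 1.0D0/YTS
      C2 = (STG/YTS)*(1.0D0 + YTY/YTS)
      DO 20 I = 1, N
         P(I) = -GK1(I) + C1*(STG*YK(I) + YTG*SK(I)) - C2*SK(I)
   20 CONTINUE
      RETURN
      END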
   
        E04DGF uses a two-step method described in detail in Gill and
        Murray [1] in which restarts and pre-conditioning are
        incorporated. Using a limited-memory quasi-Newton formula, such
        as the one above, guarantees p    to be a descent direction if
                                      k+1
                                 T
         all the inner products y s  are positive for all vectors y  and
                                 k k                               k
         s  used in the updating formula.
          k
   
        The termination criterion of E04DGF is as follows:
   
        Let (tau)  specify a parameter that indicates the number of
                 F
        correct figures desired in F  ((tau)  is equivalent to Optimality
                                    k       F
        Tolerance in the optional parameter list, see Section 5.1). If
        the following three conditions are satisfied
   
             (i) F   -F <(tau) (1+|F |)
                  k-1  k      F     k
   
                                 ______
             (ii) ||x   -x ||<  /(tau)  (1+||x ||)
                     k-1  k   \/      F       k
   
                               ______
             (iii) ||g ||<= 3 /(tau)  (1+|F |) or ||g ||<(epsilon) ,
                      k     \/      F      k         k            A
             where (epsilon)  is the absolute error associated with
                            A
             computing the objective function
   
        then the algorithm is considered to have converged. For a full
         discussion of termination criteria see Gill et al. [2], Chapter 8.
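
         As an unofficial illustration, conditions (i)-(iii) could be
         tested as follows (CONVGD is a hypothetical helper, not a Library
         routine; TAUF is the Optimality Tolerance and EPSA the absolute
         error in the computed objective):

      LOGICAL FUNCTION CONVGD(N, FKM1, FK, XKM1, XK, GK, TAUF, EPSA)
C     Sketch only: test the three termination conditions of Section 3.
C     FKM1, FK are F(k-1), F(k); XKM1, XK are x(k-1), x(k); GK is g(k).
      INTEGER N, I
      DOUBLE PRECISION FKM1, FK, XKM1(N), XK(N), GK(N), TAUF, EPSA
      DOUBLE PRECISION DX, XNRM, GNRM, RTAU, CTAU
      DX = 0.0D0
      XNRM = 0.0D0
      GNRM = 0.0D0
      DO 10 I = 1, N
         DX = DX + (XKM1(I) - XK(I))**2
         XNRM = XNRM + XK(I)**2
         GNRM = GNRM + GK(I)**2
   10 CONTINUE
      DX = SQRT(DX)
      XNRM = SQRT(XNRM)
      GNRM = SQRT(GNRM)
      RTAU = SQRT(TAUF)
      CTAU = TAUF**(1.0D0/3.0D0)
      CONVGD = FKM1 - FK .LT. TAUF*(1.0D0 + ABS(FK)) .AND.
     *         DX .LT. RTAU*(1.0D0 + XNRM) .AND.
     *         (GNRM .LE. CTAU*(1.0D0 + ABS(FK)) .OR. GNRM .LT. EPSA)
      RETURN
      END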
   
        4. References
   
        [1]   Gill P E and Murray W (1979) Conjugate-gradient Methods for
              Large-scale Nonlinear Optimization. Technical Report SOL 79-
              15. Department of Operations Research, Stanford University.
   
        [2]   Gill P E, Murray W and Wright M H (1981) Practical
              Optimization. Academic Press.
   
        5. Parameters
   
         1:  N -- INTEGER                                           Input
             On entry: the number n of variables. Constraint: N >= 1.
   
         2:  OBJFUN -- SUBROUTINE, supplied by the user.
                                                       External Procedure
             OBJFUN must calculate the objective function F(x) and its
             gradient for a specified n element vector x.
   
             Its specification is:
   
                    SUBROUTINE OBJFUN (MODE, N, X, OBJF, OBJGRD,
                   1                   NSTATE, IUSER, USER)
                    INTEGER          MODE, N, NSTATE, IUSER(*)
                    DOUBLE PRECISION X(N), OBJF, OBJGRD(N), USER(*)
   
              1:  MODE -- INTEGER                            Input/Output
                  MODE is a flag that the user may set within OBJFUN to
                  indicate a failure in the evaluation of the objective
                  function. On entry: MODE is always non-negative. On
                  exit: if MODE is negative the execution of E04DGF is
                  terminated with IFAIL set to MODE.
   
              2:  N -- INTEGER                                      Input
                  On entry: the number n of variables.
   
              3:  X(N) -- DOUBLE PRECISION array                    Input
                  On entry: the point x at which the objective function
                  is required.
   
              4:  OBJF -- DOUBLE PRECISION                         Output
                  On exit: the value of the objective function F at the
                  current point x.
   
              5:  OBJGRD(N) -- DOUBLE PRECISION array              Output
                                                                ddF
                  On exit: OBJGRD(i) must contain the value of  ---- at
                                                                ddx
                                                                   i
                  the point x, for i=1,2,...,n.
   
              6:  NSTATE -- INTEGER                                 Input
                  On entry: NSTATE will be 1 on the first call of OBJFUN
                  by E04DGF, and is 0 for all subsequent calls. Thus, if
                  the user wishes, NSTATE may be tested within OBJFUN in
                  order to perform certain calculations once only. For
                  example the user may read data or initialise COMMON
                  blocks when NSTATE = 1.
   
              7:  IUSER(*) -- INTEGER array                User Workspace
   
              8:  USER(*) -- DOUBLE PRECISION array        User Workspace
                  OBJFUN is called from E04DGF with the parameters IUSER
                  and USER as supplied to E04DGF. The user is free to use
                  arrays IUSER and USER to supply information to OBJFUN
                  as an alternative to using COMMON.
             OBJFUN must be declared as EXTERNAL in the (sub)program
             from which E04DGF is called. Parameters denoted as
             Input must not be changed by this procedure.
   
         3:  ITER -- INTEGER                                       Output
             On exit: the number of iterations performed.
   
         4:  OBJF -- DOUBLE PRECISION                              Output
             On exit: the value of the objective function F(x) at the
             final iterate.
   
         5:  OBJGRD(N) -- DOUBLE PRECISION array                   Output
             On exit: the objective gradient at the final iterate.
   
         6:  X(N) -- DOUBLE PRECISION array                  Input/Output
             On entry: an initial estimate of the solution. On exit: the
             final estimate of the solution.
   
         7:  IWORK(N+1) -- INTEGER array                        Workspace
   
         8:  WORK(13*N) -- DOUBLE PRECISION array               Workspace
   
         9:  IUSER(*) -- INTEGER array                     User Workspace
             Note: the dimension of the array IUSER must be at least 1.
             This array is not used by E04DGF, but is passed directly to
             routine OBJFUN and may be used to supply information to
             OBJFUN.
   
        10:  USER(*) -- DOUBLE PRECISION array             User Workspace
             Note: the dimension of the array USER must be at least 1.
             This array is not used by E04DGF, but is passed directly to
             routine OBJFUN and may be used to supply information to
             OBJFUN.
   
        11:  IFAIL -- INTEGER                                Input/Output
             On entry: IFAIL must be set to 0, -1 or 1. Users who are
             unfamiliar with this parameter should refer to the Essential
             Introduction for details.
   
             On exit: IFAIL = 0 unless the routine detects an error or
             gives a warning (see Section 6).
   
             For this routine, because the values of output parameters
             may be useful even if IFAIL /=0 on exit, users are
             recommended to set IFAIL to -1 before entry. It is then
             essential to test the value of IFAIL on exit.
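
         As an unofficial illustration of the calling sequence implied by
         the parameter list above, a driver might be structured as follows
         (the problem size and initial point are illustrative, and OBJFUN
         is assumed to be supplied by the user; an OBJFUN for the example
         function of Section 9 is sketched there):

      PROGRAM DRIVER
C     Sketch only: call E04DGF for a problem in NMAX variables with a
C     user-supplied subroutine OBJFUN (not shown here).
      INTEGER NMAX
      PARAMETER (NMAX=2)
      INTEGER N, ITER, IWORK(NMAX+1), IUSER(1), IFAIL, I
      DOUBLE PRECISION OBJF, OBJGRD(NMAX), X(NMAX), WORK(13*NMAX)
      DOUBLE PRECISION USER(1)
      EXTERNAL OBJFUN
      N = NMAX
C     Initial estimate of the solution (illustrative values).
      X(1) = -1.0D0
      X(2) = 1.0D0
C     Soft failure: test IFAIL on exit (see parameter 11 above).
      IFAIL = -1
      CALL E04DGF(N, OBJFUN, ITER, OBJF, OBJGRD, X, IWORK, WORK,
     *            IUSER, USER, IFAIL)
      IF (IFAIL.NE.0) WRITE (*,*) 'E04DGF returned IFAIL =', IFAIL
      WRITE (*,*) 'Iterations =', ITER, '  Objective =', OBJF
      WRITE (*,*) 'Solution =', (X(I), I=1,N)
      END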
   
   
        5.1. Optional Input Parameters
   
        Several optional parameters in E04DGF define choices in the
        behaviour of the routine. In order to reduce the number of formal
        parameters of E04DGF these optional parameters have associated
        default values (see Section 5.1.3) that are appropriate for most
        problems. Therefore the user need only specify those optional
        parameters whose values are to be different from their default
        values.
   
        The remainder of this section can be skipped by users who wish to
        use the default values for all optional parameters. A complete
        list of optional parameters and their default values is given in
        Section 5.1.3.
   
        5.1.1. Specification of the Optional Parameters
   
        Optional parameters may be specified by calling one, or both, of
        E04DJF and E04DKF prior to a call to E04DGF.
   
        E04DJF reads options from an external options file, with Begin
        and End as the first and last lines respectively and each
        intermediate line defining a single optional parameter. For
        example,
   
              Begin
                Print Level = 1
              End
   
        The call
   
              CALL E04DJF(IOPTNS, INFORM)
   
        can then be used to read the file on unit IOPTNS. INFORM will be
        zero on successful exit. E04DJF should be consulted for a full
        description of this method of supplying optional parameters.
   
        E04DKF can be called to supply options directly, one call being
        necessary for each optional parameter.
   
        For example,
   
   
              CALL E04DKF('Print level = 1')
   
   
        E04DKF should be consulted for a full description of this method
        of supplying optional parameters.
   
        All optional parameters not specified by the user are set to
        their default values. Optional parameters specified by the user
        are unaltered by E04DGF (unless they define invalid values) and
        so remain in effect for subsequent calls to E04DGF, unless
        altered by the user.
   
        5.1.2. Description of the Optional Parameters
   
        The following list (in alphabetical order) gives the valid
        options. For each option, we give the keyword, any essential
        optional qualifiers, the default value, and the definition. The
        minimum valid abbreviation of each keyword is underlined. If no
        characters of an optional qualifier are underlined, the qualifier
        may be omitted. The letter a denotes a phrase (character string)
        that qualifies an option. The letters i and r denote INTEGER and
        real values required with certain options. The number (epsilon)
        is a generic notation for machine precision, and (epsilon)
                                                                  R
        denotes the relative precision of the objective function (the
        optional parameter Function Precision; see below).
   
        Defaults
   
        This special keyword may be used to reset the default values
        following a call to E04DGF.
   
        Estimated Optimal Function Value r
   
           (Axiom parameter ES)
   
        This value of r specifies the user-supplied guess of the optimum
        objective function value. This value is used by E04DGF to
        calculate an initial step length (see Section 3). If the value of
        r is not specified by the user (the default), then this has the
        effect of setting the initial step length to unity. It should be
        noted that for badly scaled functions a unit step along the
        steepest descent direction will often compute the function at
        very large values of x.
   
                                                 0.9
        Function Precision  r Default = (epsilon)
   
           (Axiom parameter FU)
   
        The parameter defines (epsilon) , which is intended to be a
                                       R
        measure of the accuracy with which the problem function F can be
        computed. The value of (epsilon)  should reflect the relative
                                        R
        precision of 1+|F(x)|; i.e. (epsilon)  acts as a relative
                                             R
        precision when |F| is large, and as an absolute precision when
        |F| is small. For example, if F(x) is typically of order 1000 and
        the first six significant digits are known to be correct, an
        appropriate value for (epsilon)  would be 1.0E-6. In contrast, if
                                       R
                                     -4
        F(x) is typically of order 10   and the first six significant
        digits are known to be correct, an appropriate value for
        (epsilon)  would be 1.0E-10. The choice of (epsilon)  can be
                 R                                          R
        quite complicated for badly scaled problems; see Chapter 8 of
         Gill et al. [2] for a discussion of scaling techniques. The
        default value is appropriate for most simple functions that are
        computed with full accuracy. However when the accuracy of the
        computed function values is known to be significantly worse than
        full precision, the value of (epsilon)  should be large enough so
                                              R
        that E04DGF will not attempt to distinguish between function
        values that differ by less than the error inherent in the
        calculation. If 0<=r<(epsilon), where (epsilon) is the machine
         precision, then the default value is used.
   
        Iteration Limit  i Default = max(50,5n)
   
        Iters
   
        Itns
   
           (Axiom parameter IT)
   
        The value i (i>=0) specifies the maximum number of iterations
        allowed before termination. If i<0 the default value is used. See
        Section 8 for further information.
   
        Linesearch Tolerance  r Default = 0.9
   
           (Axiom parameter LIN)
   
        The value r (0<=r<1) controls the accuracy with which the step
        (alpha) taken during each iteration approximates a minimum of the
        function along the search direction (the smaller the value of r,
        the more accurate the linesearch). The default value r=0.9
        requests an inaccurate search, and is appropriate for most
        problems. A more accurate search may be appropriate when it is
        desirable to reduce the number of iterations - for example, if
        the objective function is cheap to evaluate.
   
        List     Default =  List
        Nolist
   
           (Axiom parameter LIST)
   
        Normally each optional parameter specification is printed as it
        is supplied. Nolist may be used to suppress the printing and List
        may be used to restore printing.
                                           10
        Maximum Step Length  r Default = 10
   
           (Axiom parameter MA)
   
        The value r (r>0) defines the maximum allowable step length for
        the line search. If r<=0 the default value is used.
   
                                                    0.8
         Optimality Tolerance  r Default = (epsilon)
                                                    R

            (Axiom parameter OP)
   
        The parameter r ((epsilon) <=r<1) specifies the accuracy to which
                                  R
        the user wishes the final iterate to approximate a solution of
        the problem. Broadly speaking, r indicates the number of correct
        figures desired in the objective function at the solution. For
                           - 6
        example, if r is 10    and E04DGF terminates successfully, the
        final value of F should have approximately six correct figures.
        E04DGF will terminate successfully if the iterative sequence of x
        -values is judged to have converged and the final point satisfies
        the termination criteria (see Section 3, where (tau)  represents
                                                            F
        Optimality Tolerance).
   
        Print Level  i Default = 10
   
           (Axiom parameter PR)
   
        The value i controls the amount of printout produced by E04DGF.
        The following levels of printing are available.
   
        i     Output.
   
        0     No output.
   
        1     The final solution.
   
        5     One line of output for each iteration.
   
        10    The final solution and one line of output for each
              iteration.
   
        Start Objective Check at Variable  i Default = 1
   
           (Axiom parameter STA)
   
        Stop Objective Check at Variable  i Default = n
   
           (Axiom parameter STO)
   
        These keywords take effect only if Verify Level > 0 (see below).
        They may be used to control the verification of gradient elements
        computed by subroutine OBJFUN. For example if the first 30
        variables appear linearly in the objective, so that the
        corresponding gradient elements are constant, then it is
        reasonable to specify Start Objective Check at Variable  31.
   
        Verify Level  i Default = 0
   
        Verify                           No
   
        Verify Level                     -1
   
        Verify Level                     0
   
        Verify
   
        Verify                           Yes
   
        Verify Objective Gradients
   
        Verify Gradients
   
        Verify Level                     1
   
           (Axiom parameter VE)
   
        These keywords refer to finite-difference checks on the gradient
        elements computed by the user-provided subroutine OBJFUN. It is
        possible to set Verify Level in several ways, as indicated above.
        For example, the gradients will be verified if Verify, Verify
        Yes, Verify Gradients, Verify Objective Gradients or Verify Level
        = 1 is specified.
   
         If i<0 then no checking will be performed. If i>=0 then the
        gradients will be verified at the user-supplied point. If i=0
        only a 'cheap' test will be performed, requiring one call to
        OBJFUN. If i=1, a more reliable (but more expensive) check will
        be made on individual gradient components, within the ranges
        specified by the Start and Stop keywords as described above. A
        result of the form OK or BAD? is printed by E04DGF to indicate
        whether or not each component appears to be correct.
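
         For example, checking of gradient elements 31 to 50 only could be
         requested with an options file such as the following (the element
         range is illustrative):

               Begin
                 Verify Level 1
                 Start Objective Check at Variable 31
                 Stop Objective Check at Variable 50
               End

         or by equivalent calls to E04DKF, one option per call.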
   
        5.1.3. Optional parameter checklist and default values
   
        For easy reference, the following sample list shows all valid
        keywords and their default values. The default options Function
        Precision and Optimality Tolerance depend upon (epsilon), the
        machine precision.
   
        Optional Parameters          Default Values
   
        Estimated Optimal Function
        Value
   
                                              0.9
         Function Precision           (epsilon)
   
         Iteration Limit              max(50,5n)
   
        Linesearch Tolerance         0.9
   
                                       10
        Maximum Step Length          10
   
        List/Nolist                  List
   
                                              0.8
        Optimality Tolerance         (epsilon)
   
        Print Level                  10
   
        Start Objective Check at     1
        Variable
   
        Stop Objective Check at      n
        Variable
   
        Verify Level                 0
   
        5.2. Description of Printed Output
   
        The level of printed output from E04DGF is controlled by the user
        (see the description of Print Level in Section 5.1).
   
        When Print Level >= 5, the following line of output is produced
        at each iteration.
   
        Itn            is the iteration count.
   
        Step           is the step (alpha) taken along the computed
                       search direction. On reasonably well-behaved
                       problems, the unit step will be taken as the
                       solution is approached.
   
         Nfun           is the cumulative number of evaluations of the
                       objective function needed for the linesearch.
                       Evaluations needed for the estimation of the
                       gradients by finite differences are not included.
                       Nfun is printed as a guide to the amount of work
                       required for the linesearch. E04DGF will perform
                       at most 16 function evaluations per iteration.
   
        Objective      is the value of the objective function.
   
        Norm G         is the Euclidean norm of the gradient of the
                       objective function.
   
        Norm X         is the Euclidean norm of x.
   
        Norm (X(k-1)-X(k)) is the Euclidean norm of x   -x .
                                                     k-1  k
   
         When Print Level = 1 or Print Level >= 10, the solution at
        the end of execution of E04DGF is printed out.
   
        The following describes the printout for each variable:
   
        Variable       gives the name (VARBL) and index j (j = 1 to n) of
                       the variable
   
        Value          is the value of the variable at the final iterate
   
        Gradient Value is the value of the gradient of the objective
                       function with respect to the jth variable at the
                       final iterate
   
        6. Error Indicators and Warnings
   
        Errors or warnings specified by the routine:
   
        If on entry IFAIL = 0 or -1, explanatory error messages are
        output on the current error message unit (as defined by X04AAF).
   
        On exit from E04DGF, IFAIL should be tested. If Print Level > 0
        then a short description of IFAIL is printed.
   
        Errors and diagnostics indicated by IFAIL from E04DGF are as
        follows:
   
        IFAIL< 0
             A negative value of IFAIL indicates an exit from E04DGF
             because the user set MODE negative in routine OBJFUN. The
             value of IFAIL will be the same as the user's setting of
             MODE.
   
        IFAIL= 1
             Not used by this routine.
   
        IFAIL= 2
             Not used by this routine.
   
        IFAIL= 3
             The maximum number of iterations has been performed. If the
              algorithm appears to be making progress, the Iteration Limit
              value may be too small (see Section 5.1.2), so the user
              should increase Iteration Limit and rerun E04DGF. If the
              algorithm seems to be 'bogged down', the user should check
              for incorrect gradients or ill-conditioning as described
              below under IFAIL = 6.
   
        IFAIL= 4
             The computed upper bound on the step length taken during the
             linesearch was too small. A rerun with an increased value of
             the Maximum Step Length ((rho) say) may be successful unless
                      10
             (rho)>=10   (the default value), in which case the current
             point cannot be improved upon.
   
        IFAIL= 5
             Not used by this routine.
   
        IFAIL= 6
             A sufficient decrease in the function value could not be
             attained during the final linesearch. If the subroutine
             OBJFUN computes the function and gradients correctly, then
              this may occur because an overly stringent accuracy has been
              requested (i.e., Optimality Tolerance is too small), or because
              the minimum lies close to a step length of zero. In this case
             the user should apply the four tests described in Section 3
             to determine whether or not the final solution is acceptable
             (the user will need to set Print Level >= 5). For a
              discussion of attainable accuracy see Gill et al. [2].
   
             If many iterations have occurred in which essentially no
             progress has been made or E04DGF has failed to move from the
             initial point, subroutine OBJFUN may be incorrect. The user
             should refer to the comments below under IFAIL = 7 and check
             the gradients using the Verify parameter. Unfortunately,
             there may be small errors in the objective gradients that
             cannot be detected by the verification process. Finite-
             difference approximations to first derivatives are
             catastrophically affected by even small inaccuracies.
   
        IFAIL= 7
             Large errors were found in the derivatives of the objective
             function. This value of IFAIL will occur if the verification
             process indicated that at least one gradient component had
             no correct figures. The user should refer to the printed
             output to determine which elements are suspected to be in
             error.
   
             As a first step, the user should check that the code for the
             objective values is correct - for example, by computing the
             function at a point where the correct value is known.
             However, care should be taken that the chosen point fully
             tests the evaluation of the function. It is remarkable how
             often the values x=0 or x=1 are used to test function
             evaluation procedures, and how often the special properties
             of these numbers make the test meaningless.
   
             Special care should be used in this test if computation of
             the objective function involves subsidiary data communicated
             in COMMON storage. Although the first evaluation of the
             function may be correct, subsequent calculations may be in
             error because some of the subsidiary data has accidentally
             been overwritten.
   
             Errors in programming the function may be quite subtle in
             that the function value is 'almost' correct. For example,
             the function may not be accurate to full precision because
             of the inaccurate calculation of a subsidiary quantity, or
             the limited accuracy of data upon which the function
             depends. A common error on machines where numerical
             calculations are usually performed in double precision is to
             include even one single-precision constant in the
             calculation of the function; since some compilers do not
             convert such constants to double precision, half the correct
             figures may be lost by such a seemingly trivial error.
   
        IFAIL= 8
             The gradient (g) at the starting point is too small. The
                    T
             value g g is less than (epsilon) |F(x )|, where (epsilon)
                                             m    o                   m
             is the machine precision.
   
             The problem should be rerun at a different starting point.
   
        IFAIL= 9
             On entry N < 1.
   
        7. Accuracy
   
        On successful exit the accuracy of the solution will be as
        defined by the optional parameter Optimality Tolerance.
   
        8. Further Comments
   
        Problems whose Hessian matrices at the solution contain sets of
        clustered eigenvalues are likely to be minimized in significantly
        fewer than n iterations. Problems without this property may
        require anything between n and 5n iterations, with approximately
        2n iterations being a common figure for moderately difficult
        problems.
   
        9. Example
   
        To find a minimum of the function
   
                              x
                               1   2   2
                           F=e  (4x +2x +4x x +2x +1).
                                   1   2   1 2   2
   
        The example program is not reproduced here. The source code for
        all example programs is distributed with the NAG Foundation
        Library software and should be available on-line.
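
         An OBJFUN for this function must return both the function value
         and its gradient; an unofficial sketch (not the distributed
         example program) is:

      SUBROUTINE OBJFUN(MODE, N, X, OBJF, OBJGRD, NSTATE, IUSER, USER)
C     Sketch only: objective function and gradient for the example
C     above (n = 2).  MODE, NSTATE, IUSER and USER are not needed here.
      INTEGER MODE, N, NSTATE, IUSER(*)
      DOUBLE PRECISION X(N), OBJF, OBJGRD(N), USER(*)
      DOUBLE PRECISION EX, Q
      EX = EXP(X(1))
      Q = 4.0D0*X(1)**2 + 2.0D0*X(2)**2 + 4.0D0*X(1)*X(2)
     *    + 2.0D0*X(2) + 1.0D0
      OBJF = EX*Q
      OBJGRD(1) = EX*(Q + 8.0D0*X(1) + 4.0D0*X(2))
      OBJGRD(2) = EX*(4.0D0*X(1) + 4.0D0*X(2) + 2.0D0)
      RETURN
      END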
\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04djf}{NAG On-line Documentation: e04djf}
\beginscroll
\begin{verbatim}



     E04DJF(3NAG)      Foundation Library (12/10/92)      E04DJF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04DJF
                  E04DJF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          To supply optional parameters to E04DGF from an external file.

          2. Specification

                 SUBROUTINE E04DJF (IOPTNS, INFORM)
                 INTEGER          IOPTNS, INFORM

          3. Description

          E04DJF may be used to supply values for optional parameters to
          E04DGF. E04DJF reads an external file and each line of the file
          defines a single optional parameter. It is only necessary to
          supply values for those parameters whose values are to be
          different from their default values.

          Each optional parameter is defined by a single character string
          of up to 72 characters, consisting of one or more items. The
          items associated with a given option must be separated by spaces,
          or equal signs (=). Alphabetic characters may be upper or lower
          case. The string

                Print level = 1

          is an example of a string used to set an optional parameter. For
          each option the string contains one or more of the following
          items:

          (a)   A mandatory keyword.

          (b)   A phrase that qualifies the keyword.

          (c)   A number that specifies an INTEGER or real value. Such
                numbers may be up to 16 contiguous characters in Fortran
                77's I, F, E or D formats, terminated by a space if this is
                not the last item on the line.

          Blank strings and comments are ignored. A comment begins with an
          asterisk (*) and all subsequent characters in the string are
          regarded as part of the comment.

          The file containing the options must start with begin and must
          finish with end. An example of a valid options file is:

                Begin * Example options file
                  Print level = 10
                End


          Normally each line of the file is printed as it is read, on the
          current advisory message unit (see X04ABF), but printing may be
          suppressed using the keyword nolist. To suppress printing of
          begin, nolist must be the first option supplied as in the file:

                Begin
                  Nolist
                  Print level = 10
                End

          Printing will automatically be turned on again after a call to
          E04DGF and may be turned on again at any time by the user by
          using the keyword list.

          Optional parameter settings are preserved following a call to
          E04DGF, and so the keyword defaults is provided to allow the user
          to reset all the optional parameters to their default values
          prior to a subsequent call to E04DGF.

          A complete list of optional parameters, their abbreviations,
          synonyms and default values is given in Section 5.1 of the
          routine document for E04DGF.
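
          As an unofficial illustration, an options file might be read and
          checked as follows (the unit number and file name are arbitrary
          choices, not requirements of the Library):

      PROGRAM OPTRD
C     Sketch only: read E04DGF options from a file on unit 7 and
C     report whether it had the required Begin/End structure.
      INTEGER INFORM
      OPEN (UNIT=7, FILE='e04dgf.opt', STATUS='OLD')
      CALL E04DJF(7, INFORM)
      CLOSE (7)
      IF (INFORM.NE.0) WRITE (*,*) 'Options file not read, INFORM =',
     *   INFORM
      END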

          4. References

          None.

          5. Parameters

           1:  IOPTNS -- INTEGER                                      Input
               On entry: IOPTNS must be the unit number of the options
               file. Constraint: 0 <= IOPTNS <= 99.

           2:  INFORM -- INTEGER                                     Output
               On exit: INFORM will be zero if an options file with the
               correct structure has been read. Otherwise INFORM will be
               positive. Positive values of INFORM indicate that an options
               file may not have been successfully read as follows:
               INFORM = 1
                     IOPTNS is not in the range [0,99].

               INFORM = 2
                     begin was found, but end-of-file was found before end
                     was found.

               INFORM = 3
                     end-of-file was found before begin was found.

          6. Error Indicators and Warnings

          If a line is not recognised as a valid option, then a warning
          message is output on the current advisory message unit (see
          X04ABF).

          7. Accuracy

          Not applicable.

          8. Further Comments

          E04DKF may also be used to supply optional parameters to E04DGF.

          9. Example

          See the example for E04DGF.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04dkf}{NAG On-line Documentation: e04dkf}
\beginscroll
\begin{verbatim}



     E04DKF(3NAG)      Foundation Library (12/10/92)      E04DKF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04DKF
                  E04DKF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          To supply individual optional parameters to E04DGF.

          2. Specification

                 SUBROUTINE E04DKF (STRING)
                 CHARACTER*(*)    STRING

          3. Description

          E04DKF may be used to supply values for optional parameters to
          E04DGF. It is only necessary to call E04DKF for those parameters
          whose values are to be different from their default values. One
          call to E04DKF sets one parameter value.

          Each optional parameter is defined by a single character string
          of up to 72 characters, consisting of one or more items. The
          items associated with a given option must be separated by spaces,
          or equal signs (=). Alphabetic characters may be upper or lower
          case. The string

                Print Level = 1

          is an example of a string used to set an optional parameter. For
          each option the string contains one or more of the following
          items:

          (a)   A mandatory keyword.

          (b)   A phrase that qualifies the keyword.

          (c)   A number that specifies an INTEGER or real value. Such
                numbers may be up to 16 contiguous characters in Fortran
                77's I, F, E or D formats, terminated by a space if this is
                not the last item on the line.

          Blank strings and comments are ignored. A comment begins with an
          asterisk (*) and all subsequent characters in the string are
          regarded as part of the comment.

          Normally, each user-specified option is printed as it is defined,
          on the current advisory message unit (see X04ABF), but this
          printing may be suppressed using the keyword nolist. Thus the
          statement

               CALL E04DKF ('Nolist')

          suppresses printing of this and subsequent options. Printing will
          automatically be turned on again after a call to E04DGF, and may
          be turned on again at any time by the user, by using the keyword
          list.

          Optional parameter settings are preserved following a call to
          E04DGF, and so the keyword defaults is provided to allow the user
          to reset all the optional parameters to their default values by
          the statement,

               CALL E04DKF ('Defaults')

          prior to a subsequent call to E04DGF.

          A complete list of optional parameters, their abbreviations,
          synonyms and default values is given in Section 5.1 of the
          routine document for E04DGF.
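
          For example, a group of options might be supplied, after first
          restoring the defaults, by a sequence of calls such as the
          following (the values shown are illustrative only):

               CALL E04DKF ('Defaults')
               CALL E04DKF ('Print Level = 5')
               CALL E04DKF ('Iteration Limit = 200')
               CALL E04DKF ('Verify Level = 1')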

          4. References

          None.

          5. Parameters

           1:  STRING -- CHARACTER*(*)                                Input
               On entry: STRING must be a single valid option string. See
               Section 3 above, and Section 5.1 of the routine document for
               E04DGF.

          6. Error Indicators and Warnings

          If the parameter STRING is not recognised as a valid option
          string, then a warning message is output on the current advisory
          message unit (see X04ABF).

          7. Accuracy

          Not applicable.

          8. Further Comments

          E04DJF may also be used to supply optional parameters to E04DGF.

          9. Example

          See the example for E04DGF.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04fdf}{NAG On-line Documentation: e04fdf}
\beginscroll
\begin{verbatim}



     E04FDF(3NAG)      Foundation Library (12/10/92)      E04FDF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04FDF
                  E04FDF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E04FDF is an easy-to-use algorithm for finding an unconstrained
          minimum of a sum of squares of m nonlinear functions in n
          variables (m>=n). No derivatives are required.

          It is intended for functions which are continuous and which have
          continuous first and second derivatives (although it will usually
          work even if the derivatives have occasional discontinuities).

          2. Specification

                 SUBROUTINE E04FDF (M, N, X, FSUMSQ, IW, LIW, W, LW, IFAIL)
                 INTEGER          M, N, IW(LIW), LIW, LW, IFAIL
                 DOUBLE PRECISION X(N), FSUMSQ, W(LW)

          3. Description

          This routine is essentially identical to the subroutine LSNDN1 in
          the National Physical Laboratory Algorithms Library. It is
          applicable to problems of the form

                                            m
                                            --        2
                             Minimize F(x)= >  [f (x)]
                                            --   i
                                            i=1

                                T
          where x=(x ,x ,...,x )  and m>=n. (The functions f (x) are often
                    1  2      n                             i
          referred to as 'residuals'.) The user must supply a subroutine
          LSFUN1 to evaluate functions f (x) at any point x.
                                        i

          From a starting point supplied by the user, a sequence of points
          is generated which is intended to converge to a local minimum of
          the sum of squares. These points are generated using estimates of
          the curvature of F(x).

          4. References

          [1]   Gill P E and Murray W (1978) Algorithms for the Solution of
                the Nonlinear Least-squares Problem. SIAM J. Numer. Anal. 15
                977--992.

          5. Parameters

           1:  M -- INTEGER                                           Input

           2:  N -- INTEGER                                           Input
               On entry: the number m of residuals f (x), and the number n
                                                    i
               of variables, x . Constraint: 1 <= N <= M.
                              j

           3:  X(N) -- DOUBLE PRECISION array                  Input/Output
               On entry: X(j) must be set to a guess at the jth component
               of the position of the minimum, for j=1,2,...,n. On exit:
               the lowest point found during the calculations. Thus, if
               IFAIL = 0 on exit, X(j) is the jth component of the position
               of the minimum.

           4:  FSUMSQ -- DOUBLE PRECISION                            Output
               On exit: the value of the sum of squares, F(x),
               corresponding to the final point stored in X.

           5:  IW(LIW) -- INTEGER array                           Workspace

           6:  LIW -- INTEGER                                         Input
               On entry: the length of IW as declared in the (sub)program
               from which E04FDF has been called. Constraint: LIW >= 1.

           7:  W(LW) -- DOUBLE PRECISION array                    Workspace

           8:  LW -- INTEGER                                          Input
               On entry: the length of W as declared in the (sub)program
               from which E04FDF is called. Constraints:
                    LW >= N*(7 + N + 2*M + (N-1)/2) + 3*M, if N > 1,

                     LW >= 9 + 5*M, if N = 1.

           9:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. Users who are
               unfamiliar with this parameter should refer to the Essential
               Introduction for details.

               On exit: IFAIL = 0 unless the routine detects an error or
               gives a warning (see Section 6).

               For this routine, because the values of output parameters
               may be useful even if IFAIL /=0 on exit, users are
               recommended to set IFAIL to -1 before entry. It is then
               essential to test the value of IFAIL on exit.

          5.1. Optional Parameters

               LSFUN1 -- SUBROUTINE, supplied by the user.
                                                         External Procedure
               This routine must be supplied by the user to calculate the
               vector of values f (x) at any point x. Since the routine is
                                 i
               not a parameter to E04FDF, it must be called LSFUN1. It
               should be tested separately before being used in conjunction
               with E04FDF (see the Chapter Introduction).

               Its specification is:

                      SUBROUTINE LSFUN1 (M, N, XC, FVECC)
                      INTEGER          M, N
                      DOUBLE PRECISION XC(N), FVECC(M)

                1:  M -- INTEGER                                      Input

                2:  N -- INTEGER                                      Input
                    On entry: the numbers m and n of residuals and
                    variables, respectively.

                3:  XC(N) -- DOUBLE PRECISION array                   Input
                    On entry: the point x at which the values of the f
                                                                       i
                    are required.

                4:  FVECC(M) -- DOUBLE PRECISION array               Output
                    On exit: FVECC(i) must contain the value of f  at the
                                                                 i
                    point x, for i=1,2,...,m.
               LSFUN1 must be declared as EXTERNAL in the (sub)program
               from which E04FDF is called. Parameters denoted as
               Input must not be changed by this procedure.

          6. Error Indicators and Warnings

          Errors or warnings specified by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry N < 1,

               or       M < N,

               or       LIW < 1,

               or       LW < N*(7 + N + 2*M + (N-1)/2) + 3*M, when N > 1,

                or       LW < 9 + 5*M, when N = 1.

          IFAIL= 2
               There have been 400*n calls of LSFUN1, yet the algorithm
               does not seem to have converged. This may be due to an
               awkward function or to a poor starting point, so it is worth
               restarting E04FDF from the final point held in X.

          IFAIL= 3
               The final point does not satisfy the conditions for
               acceptance as a minimum, but no lower point could be found.

          IFAIL= 4
               An auxiliary routine has been unable to complete a singular
               value decomposition in a reasonable number of sub-
               iterations.

          IFAIL= 5

          IFAIL= 6

          IFAIL= 7

          IFAIL= 8
               There is some doubt about whether the point x found by
               E04FDF is a minimum of F(x). The degree of confidence in the
               result decreases as IFAIL increases. Thus when IFAIL = 5, it
               is probable that the final x gives a good estimate of the
               position of a minimum, but when IFAIL = 8 it is very
               unlikely that the routine has found a minimum.

          If the user is not satisfied with the result (e.g. because IFAIL
          lies between 3 and 8), it is worth restarting the calculations
          from a different starting point (not the point at which the
          failure occurred) in order to avoid the region which caused the
          failure. Repeated failure may indicate some defect in the
          formulation of the problem.

          7. Accuracy

          If the problem is reasonably well scaled and a successful exit is
          made, then, for a computer with a mantissa of t decimals, one
          would expect to get about t/2-1 decimals accuracy in the
          components of x and between t-1 (if F(x) is of order 1 at the
          minimum) and 2t-2 (if F(x) is close to zero at the minimum)
          decimals accuracy in F(x).

          8. Further Comments

          The number of iterations required depends on the number of
          variables, the number of residuals and their behaviour, and the
          distance of the starting point from the solution. The number of
          multiplications performed per iteration of E04FDF varies, but for
                                   2    3
          m>>n is approximately n*m +O(n ). In addition, each iteration
          makes at least n+1 calls of LSFUN1. So, unless the residuals can
          be evaluated very quickly, the run time will be dominated by the
          time spent in LSFUN1.

          Ideally, the problem should be scaled so that the minimum value
          of the sum of squares is in the range (0,1), and so that at
          points a unit distance away from the solution the sum of squares
          is approximately a unit value greater than at the minimum. It is
          unlikely that the user will be able to follow these
          recommendations very closely, but it is worth trying (by
          guesswork), as sensible scaling will reduce the difficulty of the
          minimization problem, so that E04FDF will take less computer
          time.

          When the sum of squares represents the goodness of fit of a
          nonlinear model to observed data, elements of the variance-
          covariance matrix of the estimated regression coefficients can be
          computed by a subsequent call to E04YCF, using information
          returned in segments of the workspace array W. See E04YCF for
          further details.

          9. Example

          To find least-squares estimates of x , x  and x  in the model
                                              1   2      3

                                            t
                                             1
                                   y=x + ---------
                                      1  x t +x t
                                          2 2  3 3

          using the 15 sets of data given in the following table.

                                  y    t   t    t
                                        1   2    3
                                 0.14  1.0 15.0 1.0
                                 0.18  2.0 14.0 2.0
                                 0.22  3.0 13.0 3.0
                                 0.25  4.0 12.0 4.0
                                 0.29  5.0 11.0 5.0
                                 0.32  6.0 10.0 6.0
                                 0.35  7.0  9.0 7.0
                                 0.39  8.0  8.0 8.0
                                 0.37  9.0  7.0 7.0
                                 0.58 10.0  6.0 6.0
                                 0.73 11.0  5.0 5.0
                                 0.96 12.0  4.0 4.0
                                 1.34 13.0  3.0 3.0
                                 2.10 14.0  2.0 2.0
                                 4.39 15.0  1.0 1.0

          The program uses (0.5, 1.0, 1.5) as the initial guess at the
          position of the minimum.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
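
          An unofficial sketch of a driver and LSFUN1 along these lines,
          passing the data to LSFUN1 through a COMMON block (one of
          several possible choices), is:

      PROGRAM LSQFIT
C     Sketch only: fit the model of Section 9 by calling E04FDF.  The
C     data are generated from the table above and passed to LSFUN1 in
C     the COMMON block /LSDATA/.
      INTEGER M, N, LIW, LW
      PARAMETER (M=15, N=3, LIW=1, LW=N*(7+N+2*M+(N-1)/2)+3*M)
      INTEGER IW(LIW), IFAIL, I
      DOUBLE PRECISION X(N), FSUMSQ, W(LW), YDAT(M)
      DOUBLE PRECISION Y(15), T(15,3)
      COMMON /LSDATA/ Y, T
      DATA YDAT /0.14D0, 0.18D0, 0.22D0, 0.25D0, 0.29D0, 0.32D0,
     *           0.35D0, 0.39D0, 0.37D0, 0.58D0, 0.73D0, 0.96D0,
     *           1.34D0, 2.10D0, 4.39D0/
C     Copy the y values into COMMON and generate the t values, which
C     follow the pattern of the table: t1 = i, t2 = 16-i, t3 = min.
      DO 10 I = 1, M
         Y(I) = YDAT(I)
         T(I,1) = DBLE(I)
         T(I,2) = DBLE(16-I)
         T(I,3) = MIN(T(I,1),T(I,2))
   10 CONTINUE
C     Initial guess from Section 9.
      X(1) = 0.5D0
      X(2) = 1.0D0
      X(3) = 1.5D0
      IFAIL = -1
      CALL E04FDF(M, N, X, FSUMSQ, IW, LIW, W, LW, IFAIL)
      WRITE (*,*) 'IFAIL  =', IFAIL
      WRITE (*,*) 'FSUMSQ =', FSUMSQ
      WRITE (*,*) 'X =', (X(I), I=1,N)
      END

      SUBROUTINE LSFUN1(M, N, XC, FVECC)
C     Sketch only: residuals f(i) = x1 + t1/(x2*t2 + x3*t3) - y(i).
      INTEGER M, N, I
      DOUBLE PRECISION XC(N), FVECC(M)
      DOUBLE PRECISION Y(15), T(15,3)
      COMMON /LSDATA/ Y, T
      DO 10 I = 1, M
         FVECC(I) = XC(1) + T(I,1)/(XC(2)*T(I,2) + XC(3)*T(I,3)) - Y(I)
   10 CONTINUE
      RETURN
      END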

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04gcf}{NAG On-line Documentation: e04gcf}
\beginscroll
\begin{verbatim}



     E04GCF(3NAG)      Foundation Library (12/10/92)      E04GCF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04GCF
                  E04GCF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E04GCF is an easy-to-use quasi-Newton algorithm for finding an
          unconstrained minimum of a sum of squares of m nonlinear
          functions in n variables (m>=n). First derivatives are required.

          It is intended for functions which are continuous and which have
          continuous first and second derivatives (although it will usually
          work even if the derivatives have occasional discontinuities).

          2. Specification

                 SUBROUTINE E04GCF (M, N, X, FSUMSQ, IW, LIW, W, LW, IFAIL)
                 INTEGER          M, N, IW(LIW), LIW, LW, IFAIL
                 DOUBLE PRECISION X(N), FSUMSQ, W(LW)

          3. Description

          This routine is essentially identical to the subroutine LSFDQ2 in
          the National Physical Laboratory Algorithms Library. It is
          applicable to problems of the form

                                             m
                                             --        2
                             Minimize  F(x)= >  [f (x)]
                                             --   i
                                             i=1

                                T
          where x=(x ,x ,...,x )  and m>=n. (The functions f (x) are often
                    1  2      n                             i
          referred to as 'residuals'.) The user must supply a subroutine
          LSFUN2 to evaluate the residuals and their first derivatives at
          any point x.

          Before attempting to minimize the sum of squares, the algorithm
          checks LSFUN2 for consistency. Then, from a starting point
          supplied by the user, a sequence of points is generated which is
          intended to converge to a local minimum of the sum of squares.
          These points are generated using estimates of the curvature of
          F(x).

          4. References

          [1]   Gill P E and Murray W (1978) Algorithms for the Solution of
                the Nonlinear Least-squares Problem. SIAM J. Numer. Anal. 15
                977--992.

          5. Parameters

           1:  M -- INTEGER                                           Input

           2:  N -- INTEGER                                           Input
               On entry: the number m of residuals f (x), and the number n
                                                    i
               of variables, x . Constraint: 1 <= N <= M.
                              j

           3:  X(N) -- DOUBLE PRECISION array                  Input/Output
               On entry: X(j) must be set to a guess at the jth component
               of the position of the minimum, for j=1,2,...,n. The routine
               checks the first derivatives calculated by LSFUN2 at the
               starting point, and so is more likely to detect an error in
               the user's routine if the initial X(j) are non-zero and
               mutually distinct. On exit: the lowest point found during
                the calculations. Thus, if IFAIL = 0 on exit, X(j) is the
                jth component of the position of the minimum.

           4:  FSUMSQ -- DOUBLE PRECISION                            Output
               On exit: the value of the sum of squares, F(x),
               corresponding to the final point stored in X.

           5:  IW(LIW) -- INTEGER array                           Workspace

           6:  LIW -- INTEGER                                         Input
               On entry: the length of IW as declared in the (sub)program
               from which E04GCF is called. Constraint: LIW >= 1.

           7:  W(LW) -- DOUBLE PRECISION array                    Workspace

           8:  LW -- INTEGER                                          Input
               On entry: the length of W as declared in the (sub)program
               from which E04GCF is called. Constraints:
                    LW >= 2*N*(4 + N + M) + 3*M, if N > 1,

                    LW >= 11 + 5*M, if N = 1.

           9:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. Users who are
               unfamiliar with this parameter should refer to the Essential
               Introduction for details.

               On exit: IFAIL = 0 unless the routine detects an error or
               gives a warning (see Section 6).

               For this routine, because the values of output parameters
               may be useful even if IFAIL /=0 on exit, users are
               recommended to set IFAIL to -1 before entry. It is then
               essential to test the value of IFAIL on exit.

          5.1. Optional Parameters

               LSFUN2 -- SUBROUTINE, supplied by the user.
                                                         External Procedure
               This routine must be supplied by the user to calculate the
               vector of values f (x) and the Jacobian matrix of first
                                 i
                            ddf
                               i
               derivatives  ---- at any point x. Since the routine is not a
                            ddx
                               j
               parameter to E04GCF, it must be called LSFUN2. It should be
               tested separately before being used in conjunction with
               E04GCF (see the Chapter Introduction).

               Its specification is:

                      SUBROUTINE LSFUN2 (M, N, XC, FVECC, FJACC, LJC)
                      INTEGER          M, N, LJC
                      DOUBLE PRECISION XC(N), FVECC(M), FJACC(LJC,N)
               Important: The dimension declaration for FJACC must
               contain the variable LJC, not an integer constant.

                1:  M -- INTEGER                                      Input

                2:  N -- INTEGER                                      Input
                    On entry: the numbers m and n of residuals and
                    variables, respectively.

                3:  XC(N) -- DOUBLE PRECISION array                   Input
                    On entry: the point x at which the values of the f
                                                                       i
                             ddf
                                i
                    and the  ---- are required.
                             ddx
                                j

                4:  FVECC(M) -- DOUBLE PRECISION array               Output
                    On exit: FVECC(i) must contain the value of f  at the
                                                                  i
                    point x, for i=1,2,...,m.

                5:  FJACC(LJC,N) -- DOUBLE PRECISION array           Output
                                                                    ddf
                                                                       i
                    On exit: FJACC(i,j) must contain the value of  ---- at
                                                                    ddx
                                                                       j
                    the point x, for i=1,2,...,m; j=1,2,...,n.

                6:  LJC -- INTEGER                                    Input
                    On entry: the first dimension of the array FJACC.
               LSFUN2 must be declared as EXTERNAL in the (sub)program
               from which E04GCF is called. Parameters denoted as
               Input must not be changed by this procedure.
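
          As an unofficial illustration, an LSFUN2 for the model used in
          the E04FDF example (Section 9 of that document), with the data
          assumed to be held in a COMMON block by the calling program,
          might be written as:

      SUBROUTINE LSFUN2(M, N, XC, FVECC, FJACC, LJC)
C     Sketch only: residuals and first derivatives for the model
C     y = x1 + t1/(x2*t2 + x3*t3).  The data y and t are assumed to
C     have been placed in the COMMON block /LSDATA/ by the caller.
      INTEGER M, N, LJC, I
      DOUBLE PRECISION XC(N), FVECC(M), FJACC(LJC,N)
      DOUBLE PRECISION Y(15), T(15,3), D
      COMMON /LSDATA/ Y, T
      DO 10 I = 1, M
         D = XC(2)*T(I,2) + XC(3)*T(I,3)
         FVECC(I) = XC(1) + T(I,1)/D - Y(I)
         FJACC(I,1) = 1.0D0
         FJACC(I,2) = -T(I,1)*T(I,2)/D**2
         FJACC(I,3) = -T(I,1)*T(I,3)/D**2
   10 CONTINUE
      RETURN
      END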

          6. Error Indicators and Warnings

          Errors or warnings specified by the routine:

          If on entry IFAIL = 0 or -1, explanatory error messages are
          output on the current error message unit (as defined by X04AAF).

          IFAIL= 1
               On entry N < 1,

               or       M < N,

               or       LIW < 1,

               or       LW < 2*N*(4 + N + M) + 3*M, when N > 1,

                or       LW < 11 + 5*M, when N = 1.

          IFAIL= 2
               There have been 50*n calls of LSFUN2, yet the algorithm does
               not seem to have converged. This may be due to an awkward
               function or to a poor starting point, so it is worth
               restarting E04GCF from the final point held in X.

          IFAIL= 3
               The final point does not satisfy the conditions for
               acceptance as a minimum, but no lower point could be found.

          IFAIL= 4
               An auxiliary routine has been unable to complete a singular
               value decomposition in a reasonable number of sub-
               iterations.

          IFAIL= 5

          IFAIL= 6

          IFAIL= 7

          IFAIL= 8
               There is some doubt about whether the point X found by
               E04GCF is a minimum of F(x). The degree of confidence in the
               result decreases as IFAIL increases. Thus, when IFAIL = 5,
               it is probable that the final x gives a good estimate of the
               position of a minimum, but when IFAIL = 8 it is very
               unlikely that the routine has found a minimum.

          IFAIL= 9
               It is very likely that the user has made an error in forming
                                ddf
                                   i
               the derivatives  ---- in LSFUN2.
                                ddx
                                   j

          If the user is not satisfied with the result (e.g. because IFAIL
          lies between 3 and 8), it is worth restarting the calculations
          from a different starting point (not the point at which the
          failure occurred) in order to avoid the region which caused the
          failure. Repeated failure may indicate some defect in the
          formulation of the problem.

          7. Accuracy

          If the problem is reasonably well scaled and a successful exit is
          made then, for a computer with a mantissa of t decimals, one
          would expect to get t/2-1 decimals accuracy in the components of
          x and between t-1 (if F(x) is of order 1 at the minimum) and 2t-2
          (if F(x) is close to zero at the minimum) decimals accuracy in
          F(x).

          8. Further Comments

          The number of iterations required depends on the number of
          variables, the number of residuals and their behaviour, and the
          distance of the starting point from the solution. The number of
          multiplications performed per iteration of E04GCF varies, but for
                                   2    3
          m>>n is approximately n*m +O(n ). In addition, each iteration
          makes at least one call of LSFUN2. So, unless the residuals and
          their derivatives can be evaluated very quickly, the run time
          will be dominated by the time spent in LSFUN2.

          Ideally the problem should be scaled so that the minimum value of
          the sum of squares is in the range (0,1) and so that at points a
          unit distance away from the solution the sum of squares is
          approximately a unit value greater than at the minimum. It is
          unlikely that the user will be able to follow these
          recommendations very closely, but it is worth trying (by
          guesswork), as sensible scaling will reduce the difficulty of the
          minimization problem, so that E04GCF will take less computer
          time.

          When the sum of squares represents the goodness of fit of a
          nonlinear model to observed data, elements of the variance-
          covariance matrix of the estimated regression coefficients can be
          computed by a subsequent call to E04YCF, using information
          returned in segments of the workspace array W. See E04YCF for
          further details.

          9. Example

          To find the least-squares estimates of x , x  and x  in the model
                                                  1   2      3

                                            t
                                             1
                                   y=x + ---------
                                      1  x t +x t
                                          2 2  3 3

          using the 15 sets of data given in the following table.

                                  y    t   t    t
                                        1   2    3
                                 0.14  1.0 15.0 1.0
                                 0.18  2.0 14.0 2.0
                                 0.22  3.0 13.0 3.0
                                 0.25  4.0 12.0 4.0
                                 0.29  5.0 11.0 5.0
                                 0.32  6.0 10.0 6.0
                                 0.35  7.0  9.0 7.0
                                 0.39  8.0  8.0 8.0
                                 0.37  9.0  7.0 7.0
                                 0.58 10.0  6.0 6.0
                                 0.73 11.0  5.0 5.0
                                 0.96 12.0  4.0 4.0
                                 1.34 13.0  3.0 3.0
                                 2.10 14.0  2.0 2.0
                                 4.39 15.0  1.0 1.0

          The program uses (0.5, 1.0, 1.5) as the initial guess at the
          position of the minimum.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
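
           For illustration only (this is a sketch, not the distributed
           example program), a routine of the required form for the model
           above might look as follows. It assumes that the data t and y
           are made available to LSFUN2 through a COMMON block named
           DATA1; both the COMMON block and its name are invented for
           this sketch, and how the data are communicated is left to the
           user.

                  SUBROUTINE LSFUN2 (M, N, XC, FVECC, FJACC, LJC)
            C     Sketch only: residuals and first derivatives of
            C     f(i) = x1 + t1(i)/(x2*t2(i) + x3*t3(i)) - y(i).
            C     The data are assumed to be held in the (hypothetical)
            C     COMMON block DATA1.
                  INTEGER          M, N, LJC, I
                  DOUBLE PRECISION XC(N), FVECC(M), FJACC(LJC,N)
                  DOUBLE PRECISION Y(15), T(15,3), DENOM
                  COMMON           /DATA1/ Y, T
                  DO 20 I = 1, M
                     DENOM = XC(2)*T(I,2) + XC(3)*T(I,3)
                     FVECC(I) = XC(1) + T(I,1)/DENOM - Y(I)
                     FJACC(I,1) = 1.0D0
                     FJACC(I,2) = -T(I,1)*T(I,2)/DENOM**2
                     FJACC(I,3) = -T(I,1)*T(I,3)/DENOM**2
               20 CONTINUE
                  RETURN
                  END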

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04jaf}{NAG On-line Documentation: e04jaf}
\beginscroll
\begin{verbatim}



     E04JAF(3NAG)      Foundation Library (12/10/92)      E04JAF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04JAF
                  E04JAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E04JAF is an easy-to-use quasi-Newton algorithm for finding a
          minimum of a function F(x ,x ,...,x ), subject to fixed upper and
                                   1  2      n
           lower bounds on the independent variables x ,x ,...,x , using
                                                     1  2      n
          function values only.

          It is intended for functions which are continuous and which have
          continuous first and second derivatives (although it will usually
          work even if the derivatives have occasional discontinuities).

          2. Specification

                 SUBROUTINE E04JAF (N, IBOUND, BL, BU, X, F, IW, LIW, W,
                1                   LW, IFAIL)
                 INTEGER          N, IBOUND, IW(LIW), LIW, LW, IFAIL
                 DOUBLE PRECISION BL(N), BU(N), X(N), F, W(LW)

          3. Description

          This routine is applicable to problems of the form:

            Minimize  F(x ,x ,...,x ) subject to l <=x <=u  , j=1,2,...,n
                         1  2      n              j   j   j

          when derivatives of F(x) are unavailable.

          Special provision is made for problems which actually have no
          bounds on the x , problems which have only non-negativity bounds
                         j
          and problems in which l =l =...=l  and u =u =...=u . The user
                                 1  2      n      1  2      n
          must supply a subroutine FUNCT1 to calculate the value of F(x) at
          any point x.

          From a starting point supplied by the user there is generated, on
          the basis of estimates of the gradient and the curvature of F(x),
          a sequence of feasible points which is intended to converge to a
          local minimum of the constrained function. An attempt is made to
          verify that the final point is a minimum.

          4. References

          [1]   Gill P E and Murray W (1976) Minimization subject to bounds
                on the variables. Report NAC 72. National Physical
                Laboratory.

          5. Parameters

           1:  N -- INTEGER                                           Input
               On entry: the number n of independent variables.
               Constraint: N >= 1.

           2:  IBOUND -- INTEGER                                      Input
               On entry: indicates whether the facility for dealing with
               bounds of special forms is to be used.

               It must be set to one of the following values:
               IBOUND = 0
                     if the user will be supplying all the l  and u
                                                            j      j
                     individually.

               IBOUND = 1
                     if there are no bounds on any x .
                                                    j

               IBOUND = 2
                     if all the bounds are of the form 0<=x .
                                                           j

               IBOUND = 3
                     if l =l =...=l  and u =u =...=u .
                         1  2      n      1  2      n

           3:  BL(N) -- DOUBLE PRECISION array                 Input/Output
               On entry: the lower bounds l .
                                           j

               If IBOUND is set to 0, the user must set BL(j) to l , for
                                                                  j
               j=1,2,...,n. (If a lower bound is not specified for a
                                                                          6
               particular x , the corresponding BL(j) should be set to -10.)
                           j

               If IBOUND is set to 3, the user must set BL(1) to l ; E04JAF
                                                                  1
               will then set the remaining elements of BL equal to BL(1).
               On exit: the lower bounds actually used by E04JAF.

           4:  BU(N) -- DOUBLE PRECISION array                 Input/Output
               On entry: the upper bounds u .
                                           j

               If IBOUND is set to 0, the user must set BU(j) to u , for
                                                                  j
               j=1,2,...,n. (If an upper bound is not specified for a
                                                                         6
               particular x , the corresponding BU(j) should be set to 10.)
                           j

               If IBOUND is set to 3, the user must set BU(1) to u ; E04JAF
                                                                  1
               will then set the remaining elements of BU equal to BU(1).
               On exit: the upper bounds actually used by E04JAF.

           5:  X(N) -- DOUBLE PRECISION array                  Input/Output
               On entry: X(j) must be set to an estimate of the jth
               component of the position of the minimum, for j=1,2,...,n.
               On exit: the lowest point found during the calculations.
               Thus, if IFAIL = 0 on exit, X(j) is the jth component of the
               position of the minimum.

           6:  F -- DOUBLE PRECISION                                 Output
               On exit: the value of F(x) corresponding to the final point
               stored in X.

           7:  IW(LIW) -- INTEGER array                           Workspace

           8:  LIW -- INTEGER                                         Input
               On entry: the length of IW as declared in the (sub)program
               from which E04JAF is called. Constraint: LIW >= N + 2.

           9:  W(LW) -- DOUBLE PRECISION array                    Workspace

          10:  LW -- INTEGER                                          Input
               On entry: the length of W as declared in the (sub)program
                from which E04JAF is called. Constraint:
                LW>=max(N*(N-1)/2+12*N,13).

          11:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. Users who are
               unfamiliar with this parameter should refer to the Essential
               Introduction for details.

               On exit: IFAIL = 0 unless the routine detects an error or
               gives a warning (see Section 6).

               For this routine, because the values of output parameters
               may be useful even if IFAIL /=0 on exit, users are
               recommended to set IFAIL to -1 before entry. It is then
               essential to test the value of IFAIL on exit. To suppress
               the output of an error message when soft failure occurs, set
               IFAIL to 1.

          5.1. Optional Parameters

               FUNCT1 -- SUBROUTINE, supplied by the user.
                                                    External Procedure
               This routine must be supplied by the user to calculate the
               value of the function F(x) at any point x. Since this
               routine is not a parameter to E04JAF, it must be called
               FUNCT1. It should be tested separately before being used in
               conjunction with E04JAF (see the Chapter Introduction).

               Its specification is:

                      SUBROUTINE FUNCT1 (N, XC, FC)
                      INTEGER          N
                      DOUBLE PRECISION XC(N), FC

                1:  N -- INTEGER                                      Input
                    On entry: the number n of variables.

                2:  XC(N) -- DOUBLE PRECISION array                   Input
                    On entry: the point x at which the function value is
                    required.

                3:  FC -- DOUBLE PRECISION                           Output
                    On exit: the value of the function F at the current
                    point x.
               FUNCT1 must be declared as EXTERNAL in the (sub)program
               from which E04JAF is called. Parameters denoted as
               Input must not be changed by this procedure.
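
                As a purely illustrative sketch (not part of this
                specification), a FUNCT1 for the objective function of
                the example in Section 9 could take the following form:

                       SUBROUTINE FUNCT1 (N, XC, FC)
                 C     Sketch only: the objective function of the
                 C     example in Section 9,
                 C       F = (x1+10*x2)**2 + 5*(x3-x4)**2
                 C           + (x2-2*x3)**4 + 10*(x1-x4)**4
                       INTEGER          N
                       DOUBLE PRECISION XC(N), FC
                       FC = (XC(1)+10.0D0*XC(2))**2
                 1        + 5.0D0*(XC(3)-XC(4))**2
                 2        + (XC(2)-2.0D0*XC(3))**4
                 3        + 10.0D0*(XC(1)-XC(4))**4
                       RETURN
                       END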

          6. Error Indicators and Warnings

          Errors or warnings specified by the routine:

          IFAIL= 1
               On entry N < 1,

               or       IBOUND < 0,

               or       IBOUND > 3,

               or       IBOUND = 0 and BL(j) > BU(j) for some j,

               or       IBOUND = 3 and BL(1) > BU(1),

               or       LIW < N + 2,

               or       LW<max(13,12*N+N*(N-1)/2).

          IFAIL= 2
               There have been 400*n function evaluations, yet the
               algorithm does not seem to be converging. The calculations
               can be restarted from the final point held in X. The error
               may also indicate that F(x) has no minimum.

          IFAIL= 3
               The conditions for a minimum have not all been met but a
               lower point could not be found and the algorithm has failed.

          IFAIL= 4
               An overflow has occurred during the computation. This is an
               unlikely failure, but if it occurs the user should restart
               at the latest point given in X.

          IFAIL= 5

          IFAIL= 6

          IFAIL= 7

          IFAIL= 8
               There is some doubt about whether the point x found by
               E04JAF is a minimum. The degree of confidence in the result
               decreases as IFAIL increases. Thus, when IFAIL = 5 it is
               probable that the final x gives a good estimate of the
               position of a minimum, but when IFAIL = 8 it is very
               unlikely that the routine has found a minimum.

          IFAIL= 9
               In the search for a minimum, the modulus of one of the
                                                    6
               variables has become very large  (~10 ).  This indicates
               that there is a mistake in FUNCT1, that the user's problem
               has no finite solution, or that the problem needs rescaling
               (see Section 8).

          If the user is dissatisfied with the result (e.g. because IFAIL =
          5, 6, 7 or 8), it is worth restarting the calculations from a
          different starting point (not the point at which the failure
          occurred) in order to avoid the region which caused the failure.
          If persistent trouble occurs and the gradient can be calculated,
          it may be advisable to change to a routine which uses gradients
          (see the Chapter Introduction).

          7. Accuracy

          When a successful exit is made then, for a computer with a
          mantissa of t decimals, one would expect to get about t/2-1
          decimals accuracy in x and about t-1 decimals accuracy in F,
          provided the problem is reasonably well scaled.

          8. Further Comments

          The number of iterations required depends on the number of
          variables, the behaviour of F(x) and the distance of the starting
          point from the solution. The number of operations performed in an
                                                          2
          iteration of E04JAF is roughly proportional to n . In addition,
          each iteration makes at least m+1 calls of FUNCT1, where m is the
          number of variables not fixed on bounds. So, unless F(x) can be
          evaluated very quickly, the run time will be dominated by the
          time spent in FUNCT1.

          Ideally the problem should be scaled so that at the solution the
          value of F(x) and the corresponding values of x ,x ,...,x  are
                                                         1  2      n
          each in the range (-1,+1), and so that at points a unit distance
          away from the solution, F is approximately a unit value greater
          than at the minimum. It is unlikely that the user will be able to
          follow these recommendations very closely, but it is worth trying
          (by guesswork), as sensible scaling will reduce the difficulty of
          the minimization problem, so that E04JAF will take less computer
          time.

          9. Example

          To minimize

                                2         2         4          4
                     F=(x +10x ) +5(x -x ) +(x -2x ) +10(x -x )
                         1    2      3  4     2   3       1  4

          subject to

                                      1<=x <=3
                                          1

                                      -2<=x <=0
                                           2

                                      1<=x <=3,
                                          4

          starting from the initial guess (3, - 1, 0, 1).

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
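
           Purely as an illustration (again, not the distributed example
           program), a driver for this problem might be assembled as
           follows, assuming that a FUNCT1 such as the sketch in Section
           5.1 is supplied. With IBOUND = 0 the unbounded variable x3 is
           given the nominal bounds -1.0E6 and +1.0E6, as recommended in
           Section 5, and the workspace sizes follow the constraints on
           LIW and LW.

            C     Sketch driver only (illustrative, not the distributed
            C     example program).
                  INTEGER          N, LIW, LW
                  PARAMETER        (N=4, LIW=N+2, LW=N*(N-1)/2+12*N)
                  INTEGER          IBOUND, IW(LIW), IFAIL
                  DOUBLE PRECISION BL(N), BU(N), X(N), F, W(LW)
                  EXTERNAL         FUNCT1
            C     Bounds: 1<=x1<=3, -2<=x2<=0, x3 free, 1<=x4<=3
                  DATA             BL /1.0D0, -2.0D0, -1.0D6, 1.0D0/
                  DATA             BU /3.0D0,  0.0D0,  1.0D6, 3.0D0/
            C     Initial guess from Section 9
                  DATA             X  /3.0D0, -1.0D0,  0.0D0, 1.0D0/
                  IBOUND = 0
                  IFAIL = -1
                  CALL E04JAF (N, IBOUND, BL, BU, X, F, IW, LIW, W, LW,
                 1             IFAIL)
                  WRITE (*,*) 'IFAIL =', IFAIL, '   F =', F
                  WRITE (*,*) 'X =', X
                  END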

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04mbf}{NAG On-line Documentation: e04mbf}
\beginscroll
\begin{verbatim}



     E04MBF(3NAG)      Foundation Library (12/10/92)      E04MBF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04MBF
                  E04MBF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E04MBF is an easy-to-use routine for solving linear programming
          problems, or for finding a feasible point for such problems. It
          is not intended for large sparse problems.

          2. Specification

                 SUBROUTINE E04MBF (ITMAX, MSGLVL, N, NCLIN, NCTOTL, NROWA,
                1                   A, BL, BU, CVEC, LINOBJ, X, ISTATE,
                2                   OBJLP, CLAMDA, IWORK, LIWORK, WORK,
                3                   LWORK, IFAIL)
                 INTEGER          ITMAX, MSGLVL, N, NCLIN, NCTOTL, NROWA,
                1                 ISTATE(NCTOTL), IWORK(LIWORK), LIWORK,
                2                 LWORK, IFAIL
                 DOUBLE PRECISION A(NROWA,N), BL(NCTOTL), BU(NCTOTL), CVEC
                1                 (N), X(N), OBJLP, CLAMDA(NCTOTL), WORK
                2                 (LWORK)
                 LOGICAL          LINOBJ

          3. Description

          E04MBF solves linear programming (LP) problems of the form

                                T                   (x )
                    Minimize   c x   subject to  l<=(Ax)<=u            (LP)
                             n
                    x is in R

           where c is an n-element vector and A is an m by n matrix, i.e.,
           there are n variables and m general linear constraints. m may be
           zero, in which case the LP problem is subject only to bounds on
          the variables. Notice that upper and lower bounds are specified
          for all the variables and constraints. This form allows full
          generality in specifying other types of constraints. For example
          the ith constraint may be specified as equality by setting l =u .
                                                                      i  i
          If certain bounds are not present the associated elements of l or
          u can be set to special values that will be treated as -infty or
          +infty.

          The routine allows the linear objective function to be omitted in
          which case a feasible point for the set of constraints is sought.

          The user must supply an initial estimate of the solution.

          Users who wish to exercise additional control and users with
          problems whose solution would benefit from additional flexibility
          should consider using the comprehensive routine E04NAF.

          4. References

          [1]   Gill P E, Murray W and Wright M H (1981) Practical
                Optimization. Academic Press.

          [2]   Gill P E, Murray W, Saunders M A and Wright M H (1983)
                User's Guide for SOL/QPSOL. Report SOL 83-7. Department of
                Operations Research, Stanford University.

          5. Parameters

           1:  ITMAX -- INTEGER                                       Input
               On entry: an upper bound on the number of iterations to be
               taken. If ITMAX is not positive, then the value 50 is used
               in place of ITMAX.

           2:  MSGLVL -- INTEGER                                      Input
               On entry: indicates whether or not printout is required at
               the final solution. When printing occurs the output is on
               the advisory message channel (see X04ABF). A description of
               the printed output is given in Section 5.1. The level of
               printing is determined as follows:
               MSGLVL < 0
                     No printing.

               MSGLVL = 0
                     Printing only if an input parameter is incorrect, or
                     if the problem is so ill-conditioned that subsequent
                     overflow is likely. This setting is strongly
                     recommended in preference to MSGLVL < 0.

               MSGLVL = 1
                     Printing at the solution.

               MSGLVL > 1
                     Values greater than 1 should normally be used only at
                     the direction of NAG; such values may generate large
                     amounts of printed output.

           3:  N -- INTEGER                                           Input
               On entry: the number n of variables. Constraint: N >= 1.

           4:  NCLIN -- INTEGER                                       Input
               On entry: the number of general linear constraints in the
               problem. Constraint: NCLIN >= 0.

           5:  NCTOTL -- INTEGER                                      Input
               On entry: the value (N+NCLIN).

           6:  NROWA -- INTEGER                                       Input
               On entry:
               the first dimension of the array A as declared in the
               (sub)program from which E04MBF is called.
               Constraint: NROWA >= max(1,NCLIN).

           7:  A(NROWA,N) -- DOUBLE PRECISION array                   Input
               On entry: the leading NCLIN by n part of A must contain the
               NCLIN general constraints, with the coefficients of the ith
               constraint in the ith row of A. If NCLIN = 0, then A is not
               referenced.

           8:  BL(NCTOTL) -- DOUBLE PRECISION array                   Input
               On entry: the first n elements of BL must contain the lower
               bounds on the n variables, and when NCLIN > 0, the next
               NCLIN elements of BL must contain the lower bounds on the
               NCLIN general linear constraints. To specify a non-existent
               lower bound (l =-infty), set BL(j)<=-1.0E+20.
                             j

           9:  BU(NCTOTL) -- DOUBLE PRECISION array                   Input
               On entry: the first n elements of BU must contain the upper
               bounds on the n variables, and when NCLIN > 0, the next
               NCLIN elements of BU must contain the upper bounds on the
               NCLIN general linear constraints. To specify a non-existent
               upper bound (u =+infty), set BU(j)>=1.0E+20. Constraint:
                             j
               BL(j)<=BU(j), for j=1,2,...,NCTOTL.

          10:  CVEC(N) -- DOUBLE PRECISION array                      Input
               On entry: with LINOBJ = .TRUE., CVEC must contain the
               coefficients of the objective function. If LINOBJ = .FALSE.,
               then CVEC is not referenced.

          11:  LINOBJ -- LOGICAL                                      Input
               On entry: indicates whether or not a linear objective
               function is present. If LINOBJ = .TRUE., then the full LP
               problem is solved, but if LINOBJ = .FALSE., only a feasible
               point is found and the array CVEC is not referenced.

          12:  X(N) -- DOUBLE PRECISION array                  Input/Output
               On entry: an estimate of the solution, or of a feasible
               point. Even when LINOBJ = .TRUE. it is not necessary for the
               point supplied in X to be feasible. In the absence of better
               information all elements of X may be set to zero. On exit:
               the solution to the LP problem when LINOBJ = .TRUE., or a
               feasible point when LINOBJ = .FALSE..

               When no feasible point exists (see IFAIL = 1 in Section 6)
               then X contains the point for which the sum of the
               infeasibilities is a minimum. On return with IFAIL = 2, 3 or
               4, X contains the point at which E04MBF terminated.

          13:  ISTATE(NCTOTL) -- INTEGER array                       Output
               On exit: with IFAIL < 5, ISTATE indicates the status of
               every constraint at the final point. The first n elements of
               ISTATE refer to the upper and lower bounds on the variables
               and when NCLIN > 0 the next NCLIN elements refer to the
               general constraints.

               Their meaning is:
               ISTATE(j) Meaning

               -2        The constraint violates its lower bound. This
                         value cannot occur for any element of ISTATE when
                         a feasible point has been found.

               -1        The constraint violates its upper bound. This
                         value cannot occur for any element of ISTATE when
                         a feasible point has been found.

               0         The constraint is not in the working set (is not
                         active) at the final point. Usually this means
                         that the constraint lies strictly between its
                         bounds.

               1         This inequality constraint is in the working set
                         (is active) at its lower bound.

               2         This inequality constraint is in the working set
                         (is active) at its upper bound.

               3         This constraint is included in the working set (is
                         active) as an equality. This value can only occur
                         when BL(j) = BU(j).

          14:  OBJLP -- DOUBLE PRECISION                             Output
               On exit: when LINOBJ = .TRUE., then on successful exit,
               OBJLP contains the value of the objective function at the
               solution, and on exit with IFAIL = 2, 3 or 4, OBJLP contains
               the value of the objective function at the point returned in
               X.

               When LINOBJ = .FALSE., then on successful exit OBJLP will be
               zero and on return with IFAIL = 1, OBJLP contains the
               minimum sum of the infeasibilities corresponding to the
               point returned in X.

          15:  CLAMDA(NCTOTL) -- DOUBLE PRECISION array              Output
               On exit: when LINOBJ = .TRUE., then on successful exit, or
               on exit with IFAIL = 2, 3, or 4, CLAMDA contains the
               Lagrange multipliers (reduced costs) for each constraint
               with respect to the working set. The first n components of
               CLAMDA contain the multipliers for the bound constraints on
               the variables and the remaining NCLIN components contain the
               multipliers for the general linear constraints.

               If ISTATE(j) = 0 so that the jth constraint is not in the
               working set then CLAMDA(j) is zero. If X is optimal and
               ISTATE(j) = 1, then CLAMDA(j) should be non-negative, and if
               ISTATE(j) = 2, then CLAMDA(j) should be non-positive.

               When LINOBJ = .FALSE., all NCTOTL elements of CLAMDA are
               returned as zero.

          16:  IWORK(LIWORK) -- INTEGER array                     Workspace

          17:  LIWORK -- INTEGER                                      Input
               On entry: the length of the array IWORK as declared in the
               (sub)program from which E04MBF is called. Constraint:
               LIWORK>=2*N.

          18:  WORK(LWORK) -- DOUBLE PRECISION array              Workspace

          19:  LWORK -- INTEGER                                       Input
               On entry: the length of the array WORK as declared in the
               (sub)program from which E04MBF is called. Constraints:
               when N <= NCLIN then
                               2
                     LWORK>=2*N +6*N+4*NCLIN+NROWA;

               when 0 <= NCLIN < N then
                                       2
                     LWORK>=2*(NCLIN+1) +4*NCLIN+6*N+NROWA.

          20:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. Users who are
               unfamiliar with this parameter should refer to the Essential
               Introduction for details.

               On exit: IFAIL = 0 unless the routine detects an error or
               gives a warning (see Section 6).

               For this routine, because the values of output parameters
               may be useful even if IFAIL /=0 on exit, users are
               recommended to set IFAIL to -1 before entry. It is then
               essential to test the value of IFAIL on exit. To suppress
               the output of an error message when soft failure occurs, set
               IFAIL to 1.

          5.1. Description of the Printed Output

          When MSGLVL = 1, then E04MBF will produce output on the advisory
           message channel (see X04ABF), giving information on the final
          point. The following describes the printout associated with each
          variable.

          Output         Meaning

          VARBL          The name (V) and index j, for j=1,2,...,n, of the
                         variable.

          STATE          The state of the variable. (FR if neither bound is
                         in the working set, EQ for a fixed variable, LL if
                         on its lower bound, UL if on its upper bound and
                         TB if held on a temporary bound.) If the value of
                         the variable lies outside the upper or lower bound
                         then STATE will be ++ or -- respectively.

          VALUE          The value of the variable at the final iteration.

          LOWER BOUND    The lower bound specified for the variable.

          UPPER BOUND    The upper bound specified for the variable.

          LAGR MULT      The value of the Lagrange multiplier for the
                         associated bound.

          RESIDUAL       The difference between the value of the variable
                         and the nearer of its bounds.

           For each of the general constraints the printout is as above,
           except that it refers to the jth element of Ax and VARBL is
           replaced by:

          LNCON          The name (L) and index j, for j = 1,2,...,NCLIN of
                         the constraint.

          6. Error Indicators and Warnings

          Errors or warnings specified by the routine:

          Note: when MSGLVL=1 a short description of the error is printed.

          IFAIL= 1
               No feasible point could be found. Moving violated
               constraints so that they are satisfied at the point returned
               in X gives the minimum moves necessary to make the LP
               problem feasible.

          IFAIL= 2
               The solution to the LP problem is unbounded.

          IFAIL= 3
               A total of 50 changes were made to the working set without
               altering x. Cycling is probably occurring. The user should
               consider using E04NAF with MSGLVL >= 5 to monitor constraint
               additions and deletions in order to determine whether or not
               cycling is taking place.

          IFAIL= 4
               The limit on the number of iterations has been reached.
               Increase ITMAX or consider using E04NAF to monitor progress.

          IFAIL= 5
               An input parameter is invalid. Unless MSGLVL < 0 a message
               will be printed.

           Overflow
               If the printed output before the overflow occurred contains
               a warning about serious ill-conditioning in the working set
               when adding the jth constraint, then either the user should
               try using E04NAF and experiment with the magnitude of FEATOL
               (j) in that routine, or the offending linearly dependent
               constraint (with index j) should be removed from the
               problem.

          7. Accuracy

          The routine implements a numerically stable active set strategy
          and returns solutions that are as accurate as the condition of
          the LP problem warrants on the machine.

          8. Further Comments

          The time taken by each iteration is approximately proportional to
               2      2
          min(n ,NCLIN ).

          Sensible scaling of the problem is likely to reduce the number of
          iterations required and make the problem less sensitive to
          perturbations in the data, thus improving the condition of the LP
          problem. In the absence of better information it is usually
          sensible to make the Euclidean lengths of each constraint of
          comparable magnitude. See Gill et al [1] for further information
          and advice.

          Note that the routine allows constraints to be violated by an
           absolute tolerance equal to the machine precision (see X02AJF(*)).

          9. Example

          To minimize the function

                    -0.02x -0.2x -0.2x -0.2x -0.2x +0.04x +0.04x
                          1     2     3     4     5      6      7

          subject to the bounds

                                -0.01 <= x  <= 0.01
                                          1
                                -0.1 <= x  <= 0.15,
                                         2
                                -0.01 <= x  <= 0.03,
                                          3
                                -0.04 <= x  <= 0.02,
                                          4
                                -0.1 <= x  <= 0.05,
                                         5
                                -0.01 <= x
                                          6
                                -0.01 <= x
                                          7

          and the general constraints

                             x +x +x +x +x +x +x =-0.13
                              1  2  3  4  5  6  7

              0.15x +0.04x +0.02x +0.04x +0.02x +0.01x +0.03x <=-0.0049
                   1      2      3      4      5      6      7

                 0.03x +0.05x +0.08x +0.02x +0.06x +0.01x <=-0.0064
                      1      2      3      4      5      6

                     0.02x +0.04x +0.01x +0.02x +0.02x <=-0.0037
                          1      2      3      4      5

                            0.02x +0.03x +0.01x <=-0.0012
                                 1      2      5

                 -0.0992<=0.70x +0.75x +0.80x +0.75x +0.80x +0.97x
                               1      2      3      4      5      6

           -0.003<=0.02x +0.06x +0.08x +0.12x +0.02x +0.01x +0.97x <=0.002
                        1      2      3      4      5      6      7

          The initial point, which is infeasible, is

                                                                 T
                  x =(-0.01, -0.03, 0.0, -0.01, -0.1, 0.02, 0.01) .
                   0

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
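
           The fragment below is a deliberately small sketch rather than
           the example above: it sets up an invented two-variable problem
           (minimize -x1-2*x2 subject to 0<=x1<=4, 0<=x2<=4 and x1+x2<=5)
           simply to show how the parameters fit together. The workspace
           length is the bound quoted in Section 5 for the case
           0 <= NCLIN < N.

            C     Illustrative sketch only: an invented two-variable LP,
            C     not the example above.
                  INTEGER          N, NCLIN, NCTOTL, NROWA, LIWORK, LWORK
                  PARAMETER        (N=2, NCLIN=1, NCTOTL=N+NCLIN, NROWA=1,
                 1                  LIWORK=2*N,
                 2                  LWORK=2*(NCLIN+1)**2+4*NCLIN+6*N+NROWA)
                  INTEGER          ITMAX, MSGLVL, ISTATE(NCTOTL),
                 1                 IWORK(LIWORK), IFAIL
                  DOUBLE PRECISION A(NROWA,N), BL(NCTOTL), BU(NCTOTL),
                 1                 CVEC(N), X(N), OBJLP, CLAMDA(NCTOTL),
                 2                 WORK(LWORK)
                  LOGICAL          LINOBJ
            C     One general constraint x1 + x2 <= 5; it has no lower
            C     bound, so BL(3) is set to -1.0D20
                  DATA             A /1.0D0, 1.0D0/
                  DATA             BL /0.0D0, 0.0D0, -1.0D20/
                  DATA             BU /4.0D0, 4.0D0, 5.0D0/
                  DATA             CVEC /-1.0D0, -2.0D0/
                  DATA             X /0.0D0, 0.0D0/
            C     ITMAX not positive, so the default limit of 50
            C     iterations is used
                  ITMAX = 0
                  MSGLVL = 1
                  LINOBJ = .TRUE.
                  IFAIL = -1
                  CALL E04MBF (ITMAX, MSGLVL, N, NCLIN, NCTOTL, NROWA, A,
                 1             BL, BU, CVEC, LINOBJ, X, ISTATE, OBJLP,
                 2             CLAMDA, IWORK, LIWORK, WORK, LWORK, IFAIL)
                  WRITE (*,*) 'IFAIL =', IFAIL, '   OBJLP =', OBJLP
                  WRITE (*,*) 'X =', X
                  END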

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04naf}{NAG On-line Documentation: e04naf}
\beginscroll
\begin{verbatim}



     E04NAF(3NAG)      Foundation Library (12/10/92)      E04NAF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04NAF
                  E04NAF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E04NAF is a comprehensive routine for solving quadratic
          programming (QP) or linear programming (LP) problems. It is not
          intended for large sparse problems.

          2. Specification

                 SUBROUTINE E04NAF (ITMAX, MSGLVL, N, NCLIN, NCTOTL, NROWA,
                1                   NROWH, NCOLH, BIGBND, A, BL, BU, CVEC,
                2                   FEATOL, HESS, QPHESS, COLD, LP, ORTHOG,
                3                   X, ISTATE, ITER, OBJ, CLAMDA, IWORK,
                4                   LIWORK, WORK, LWORK, IFAIL)
                 INTEGER          ITMAX, MSGLVL, N, NCLIN, NCTOTL, NROWA,
                1                 NROWH, NCOLH, ISTATE(NCTOTL), ITER, IWORK
                2                 (LIWORK), LIWORK, LWORK, IFAIL
                 DOUBLE PRECISION BIGBND, A(NROWA,N), BL(NCTOTL),
                1                 BU(NCTOTL), CVEC(N), FEATOL(NCTOTL), HESS
                2                 (NROWH,NCOLH), X(N), OBJ, CLAMDA(NCTOTL),
                3                 WORK(LWORK)
                 LOGICAL          COLD, LP, ORTHOG
                 EXTERNAL         QPHESS

          3. Description

          E04NAF is essentially identical to the subroutine SOL/QPSOL
          described in Gill et al [1].

          E04NAF is designed to solve the quadratic programming (QP)
          problem - the minimization of a quadratic function subject to a
          set of linear constraints on the variables. The problem is
          assumed to be stated in the following form:

                           T   1 T                    (x )
                Minimize  c x+ -x Hx   subject to  l<=(Ax)<=u ,         (1)
                               2

          where c is a constant n-vector and H is a constant n by n
          symmetric matrix; note that H is the Hessian matrix (matrix of
          second partial derivatives) of the quadratic objective function.
          The matrix A is m by n, where m may be zero; A is treated as a
          dense matrix.

          The constraints involving A will be called the general
          constraints. Note that upper and lower bounds are specified for
          all the variables and for all the general constraints. The form
          of (1) allows full generality in specifying other types of
          constraints. In particular, an equality constraint is specified
          by setting l =u . If certain bounds are not present, the
                      i  i
          associated elements of l or u can be set to special values that
          will be treated as -infty or +infty.

          The user must supply an initial estimate of the solution to (1),
          and a subroutine that computes the product Hx for any given
          vector x. If H is positive-definite or positive semi-definite,
          E04NAF will obtain a global minimum; otherwise, the solution
          obtained will be a local minimum (which may or may not be a
          global minimum). If H is defined as the zero matrix, E04NAF will
          solve the resulting linear programming (LP) problem; however,
          this can be accomplished more efficiently by setting a logical
          variable in the call of the routine (see the parameter LP in
          Section 5).
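
           For orientation only, the fragment below shows the kind of
           computation the user-supplied routine must perform, namely
           forming the product Hx for a dense symmetric H held in full
           storage. The name HPROD and its parameter list are invented
           for this sketch; the routine actually required by E04NAF is
           QPHESS, whose specification is given in Section 5.

                  SUBROUTINE HPROD (N, LDH, H, X, HX)
            C     Illustrative sketch only: forms HX = H*X for a dense
            C     symmetric matrix H.  HPROD and its parameter list are
            C     invented here and are not part of E04NAF.
                  INTEGER          N, LDH, I, J
                  DOUBLE PRECISION H(LDH,N), X(N), HX(N)
                  DO 20 I = 1, N
                     HX(I) = 0.0D0
                     DO 10 J = 1, N
                        HX(I) = HX(I) + H(I,J)*X(J)
               10    CONTINUE
               20 CONTINUE
                  RETURN
                  END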

          E04NAF allows the user to provide the indices of the constraints
          that are believed to be exactly satisfied at the solution. This
          facility, known as a warm start, can lead to significant savings
          in computational effort when solving a sequence of related
          problems.

          The method has two distinct phases. In the first (the LP phase),
          an iterative procedure is carried out to determine a feasible
          point. In this context, feasibility is defined by a user-provided
          array FEATOL; the jth constraint is considered satisfied if its
          violation does not exceed FEATOL(j). The second phase (the QP
          phase) generates a sequence of feasible iterates in order to
          minimize the quadratic objective function. In both phases, a
          subset of the constraints - called the working set - is used to
          define the search direction at each iteration; typically, the
          working set includes constraints that are satisfied to within the
          corresponding tolerances in the FEATOL array.

          We now briefly describe a typical iteration in the QP phase. Let
          x  denote the estimate of the solution at the kth iteration; the
           k
          next iterate is defined by

                                 x   =x +(alpha) p
                                  k+1  k        k k

          where p  is an n-dimensional search direction and (alpha)  is a
                 k                                                 k
          scalar step length. Assume that the working (active) set contains
          t  linearly independent constraints, and let C  denote the matrix
           k                                            k
          of coefficients of the bounds and general constraints in the
          current working set.

          Let Z  denote a matrix whose columns form a basis for the null
               k
          space of C , so that C Z =0. (Note that Z  has n  columns, where
                    k           k k                k      z
                                T
          n =n-t .) The vector Z (c+Hx ) is called the projected gradient
           z    k               k     k
          at x . If the projected gradient is zero at x  (i.e., x  is a
              k                                        k         k
          constrained stationary point in the subspace defined by Z ),
                                                                   k
          Lagrange multipliers (lambda)  are defined as the solution of the
                                       k
          compatible overdetermined system

                                T
                               C (lambda) =c+Hx                         (2)
                                k        k     k

          The Lagrange multiplier (lambda) corresponding to an inequality
          constraint in the working set is said to be optimal if
          (lambda)<=0 when the associated constraint is at its upper bound,
          or if (lambda)>=0 when the associated constraint is at its lower
          bound. If a multiplier is non-optimal, the objective function can
          be reduced by deleting the corresponding constraint (with index
          JDEL, see Section 5.1) from the working set.

          If the projected gradient at x  is non-zero, the search direction
                                        k
          p  is defined as
           k

                                    p =Z p                              (3)
                                     k  k z

          where p  is an n -vector. In effect, the constraints in the
                 z        z
          working set are treated as equalities, by constraining p  to lie
                                                                  k
          within the subspace of vectors orthogonal to the rows of C . This
                                                                    k
          definition ensures that C p =0, and hence the values of the
                                   k k
          constraints in the working set are not altered by any move along
          p .
           k

          The vector p  is obtained by solving the equations
                      z

                                T        T
                               Z HZ p =-Z (c+Hx )                       (4)
                                k  k z   k     k

                       T
          (The matrix Z HZ  is called the projected Hessian matrix.) If the
                       k  k
          projected Hessian is positive-definite, the vector defined by (3)
          and (4) is the step to the minimum of the quadratic function in
          the subspace defined by Z .
                                   k
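
           As a concrete, purely illustrative instance of (3) and (4),
           consider n = 2 variables with H = 2I and c = (-1, 0), and a
           working set containing the single constraint x1 + x2 = 1, so
           that C = (1 1) and Z may be taken as the vector (1, -1). The
           fragment below forms the projected gradient, solves (4) (a 1
           by 1 system here) and recovers the search direction from (3).

            C     Illustrative fragment only, not part of E04NAF: one
            C     null-space step for the quadratic c'x + x'Hx/2 with
            C     H = 2I and c = (-1,0), subject to x1 + x2 = 1,
            C     starting from the feasible point x = (1,0).
                  DOUBLE PRECISION H(2,2), C(2), X(2), Z(2), G(2)
                  DOUBLE PRECISION ZTG, ZTHZ, PZ, P(2)
                  DATA             H /2.0D0, 0.0D0, 0.0D0, 2.0D0/
                  DATA             C /-1.0D0, 0.0D0/
                  DATA             X /1.0D0, 0.0D0/
                  DATA             Z /1.0D0, -1.0D0/
            C     Gradient of the quadratic at x:  g = c + H*x
                  G(1) = C(1) + H(1,1)*X(1) + H(1,2)*X(2)
                  G(2) = C(2) + H(2,1)*X(1) + H(2,2)*X(2)
            C     Projected gradient Z'g and projected Hessian Z'HZ
            C     (both scalars here, since Z has a single column)
                  ZTG  = Z(1)*G(1) + Z(2)*G(2)
                  ZTHZ = Z(1)*(H(1,1)*Z(1) + H(1,2)*Z(2))
                 1     + Z(2)*(H(2,1)*Z(1) + H(2,2)*Z(2))
            C     Equation (4), then equation (3)
                  PZ   = -ZTG/ZTHZ
                  P(1) = Z(1)*PZ
                  P(2) = Z(2)*PZ
                  WRITE (*,*) 'Search direction p =', P
                  END

           With this H the projected Hessian is positive-definite, and a
           unit step along p from x = (1, 0) reaches (0.75, 0.25), at
           which the projected gradient is zero.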

          If the projected Hessian is positive-definite and x +p  is
                                                             k  k
          feasible, (alpha)  will be taken as unity. In this case, the
                           k
          projected gradient at x    will be zero (see NORM ZTG in Section
                                 k+1
          5.1), and Lagrange multipliers can be computed (see Gill et al
          [2]). Otherwise, (alpha)  is set to the step to the 'nearest'
                                  k
          constraint (with index JADD, see Section 5.1), which is added to
          the working set at the next iteration.

          The matrix Z  is obtained from the TQ factorization of C , in
                      k                                           k
          which C  is represented as
                 k

                                   C Q=(0 T )                           (5)
                                    k      k

          where T  is reverse-triangular. It follows from (5) that Z  may
                 k                                                  k
          be taken as the first n  columns of Q. If the projected Hessian
                                 z
          is positive-definite, (3) is solved using the Cholesky
          factorization

                                      T     T
                                     Z HZ =R R
                                      k  k  k k

          where R  is upper triangular. These factorizations are updated as
                 k
          constraints enter or leave the working set (see Gill et al [2]
          for further details).

          An important feature of E04NAF is the treatment of indefiniteness
          in the projected Hessian. If the projected Hessian is positive-
          definite, it may become indefinite only when a constraint is
          deleted from the working set. In this case, a temporary
          modification (of magnitude HESS MOD, see Section 5.1) is added to
          the last diagonal element of the Cholesky factor. Once a
          modification has occurred, no further constraints are deleted
          from the working set until enough constraints have been added so
          that the projected Hessian is again positive-definite. If
          equation (1) has a finite solution, a move along the direction
          obtained by solving (4) with the modified Cholesky factor must
          encounter a constraint that is not already in the working set.

          In order to resolve indefiniteness in this way, we must ensure
          that the projected Hessian is positive-definite at the first
          iterate in the QP phase. Given the n  by n  projected Hessian, a
                                              z     z
          step-wise Cholesky factorization is performed with symmetric
          interchanges (and corresponding rearrangement of the columns of Z
          ), terminating if the next step would cause the matrix to become
          indefinite. This determines the largest possible positive-
          definite principal sub-matrix of the (permuted) projected
          Hessian. If n  steps of the Cholesky factorization have been
                       R
          successfully completed, the relevant projected Hessian is an n
                                                                        R
                                          T
          by n  positive-definite matrix Z HZ , where Z  comprises the
              R                           R  R         R
          first n  columns of Z. The quadratic function will subsequently
                 R
          be minimized within subspaces of reduced dimension until the full
          projected Hessian is positive-definite.

          If a linear program is being solved and there are fewer general
          constraints than variables, the method moves from one vertex to
          another while minimizing the objective function. When necessary,
          an initial vertex is defined by temporarily fixing some of the
          variables at their initial values.

          Several strategies are used to control ill-conditioning in the
          working set. One such strategy is associated with the FEATOL
          array. Allowing the jth constraint to be violated by as much as
          FEATOL(j) often provides a choice of constraints that could be
          added to the working set. When a choice exists, the decision is
          based on the conditioning of the working set. Negative steps are
          occasionally permitted, since x  may violate the constraint to be
                                         k
          added.

          4. References

          [1]   Gill P E, Murray W, Saunders M A and Wright M H (1983)
                User's Guide for SOL/QPSOL. Report SOL 83-7. Department of
                Operations Research, Stanford University.

          [2]   Gill P E, Murray W, Saunders M A and Wright M H (1982) The
                design and implementation of a quadratic programming
                algorithm. Report SOL 82-7. Department of Operations
                Research, Stanford University.

          [3]   Gill P E, Murray W and Wright M H (1981) Practical
                Optimization. Academic Press.

          5. Parameters

           1:  ITMAX -- INTEGER                                       Input
               On entry: an upper bound on the number of iterations to be
               taken during the LP phase or the QP phase. If ITMAX is not
               positive, then the value 50 is used in place of ITMAX.

           2:  MSGLVL -- INTEGER                                      Input
               On entry: MSGLVL must indicate the amount of intermediate
               output desired (see Section 5.1 for a description of the
               printed output). All output is written to the current
               advisory message unit (see X04ABF). For MSGLVL >= 10, each
               level includes the printout for all lower levels.
               Value   Definition

               <0      No printing.

               0       Printing only if an input parameter is incorrect, or
                       if the working set is so ill-conditioned that
                       subsequent overflow is likely. This setting is
                       strongly recommended in preference to MSGLVL < 0.

               1       The final solution only.

               5       One brief line of output for each constraint
                       addition or deletion (no printout of the final
                       solution).

               >=10    The final solution and one brief line of output for
                       each constraint addition or deletion.

               >=15    At each iteration, X, ISTATE, and the indices of the
                        free variables (i.e., the variables not currently
                       held on a bound).

               >=20    At each iteration, the Lagrange multiplier estimates
                       and the general constraint values.

               >=30    At each iteration, the diagonal elements of the
                       matrix T associated with the TQ factorization of the
                       working set, and the diagonal elements of the
                       Cholesky factor R of the projected Hessian.

               >=80    Debug printout.

               99      The arrays CVEC and HESS.

           3:  N -- INTEGER                                           Input
               On entry: the number, n, of variables. Constraint: N >= 1.

           4:  NCLIN -- INTEGER                                       Input
               On entry: the number of general linear constraints in the
               problem. Constraint: NCLIN >= 0.

           5:  NCTOTL -- INTEGER                                      Input
               On entry: the value (N+NCLIN).

           6:  NROWA -- INTEGER                                       Input
               On entry:
               the first dimension of the array A as declared in the
               (sub)program from which E04NAF is called.
               Constraint: NROWA >= max(1,NCLIN).

           7:  NROWH -- INTEGER                                       Input
               On entry: the first dimension of the array HESS as declared
               in the (sub)program from which E04NAF is called.
               Constraint: NROWH >= 1.

           8:  NCOLH -- INTEGER                                       Input
               On entry: the column dimension of the array HESS as declared
               in the (sub)program from which E04NAF is called.
               Constraint: NCOLH >= 1.

           9:  BIGBND -- DOUBLE PRECISION                             Input
               On entry: BIGBND must denote an 'infinite' component of l
               and u. Any upper bound greater than or equal to BIGBND will
               be regarded as plus infinity, and a lower bound less than or
               equal to -BIGBND will be regarded as minus infinity.
               Constraint: BIGBND > 0.0.

          10:  A(NROWA,N) -- DOUBLE PRECISION array                   Input
               On entry: the leading NCLIN by n part of A must contain the
               NCLIN general constraints, with the ith constraint in the i
               th row of A. If NCLIN = 0, then A is not referenced.

          11:  BL(NCTOTL) -- DOUBLE PRECISION array                   Input
               On entry: the lower bounds for all the constraints, in the
               following order. The first n elements of BL must contain the
               lower bounds on the variables. If NCLIN > 0, the next NCLIN
               elements of BL must contain the lower bounds for the general
               linear constraints. To specify a non-existent lower bound
               (i.e., l =-infty), the value used must satisfy BL(j)<=-
                       j
                BIGBND. To specify the jth constraint as an equality, the
               user must set BL(j) = BU(j). Constraint: BL(j) <= BU(j),
               j=1,2,...,NCTOTL.

          12:  BU(NCTOTL) -- DOUBLE PRECISION array                   Input
               On entry: the upper bounds for all the constraints, in the
               following order. The first n elements of BU must contain the
               upper bounds on the variables. If NCLIN > 0, the next NCLIN
               elements of BU must contain the upper bounds for the general
               linear constraints. To specify a non-existent upper bound
               (i.e., u =+infty), the value used must satisfy BU(j) >=
                       j
               BIGBND. To specify the jth constraint as an equality, the
               user must set BU(j) = BL(j). Constraint: BU(j) >= BL(j),
               j=1,2,...,NCTOTL.
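
                For illustration only (the numerical values here are
                hypothetical), with N = 2 and NCLIN = 1, the bounds
                X(1) >= 0 and X(2) free, together with a general linear
                constraint held as an equality with value 1.0, could be
                specified as follows:

                       BIGBND = 1.0D+6
                       BL(1)  =  0.0D0
                       BU(1)  =  BIGBND
                       BL(2)  = -BIGBND
                       BU(2)  =  BIGBND
                       BL(3)  =  1.0D0
                       BU(3)  =  1.0D0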

          13:  CVEC(N) -- DOUBLE PRECISION array                      Input
               On entry: the coefficients of the linear term of the
               objective function (the vector c in equation  (1)).

          14:  FEATOL(NCTOTL) -- DOUBLE PRECISION array               Input
               On entry: a set of positive tolerances that define the
               maximum permissible absolute violation in each constraint in
               order for a point to be considered feasible, i.e., if the
               violation in constraint j is less than FEATOL(j), the point
               is considered to be feasible with respect to the jth
               constraint. The ordering of the elements of FEATOL is the
               same as that described above for BL.

               The elements of FEATOL should not be too small and a warning
               message will be printed on the current advisory message
               channel if any element of FEATOL is less than the machine
               precision (see X02AJF(*)). As the elements of FEATOL
               increase, the algorithm is less likely to encounter
               difficulties with ill-conditioning and degeneracy. However,
               larger values of FEATOL(j) mean that constraint j could be
               violated by a significant amount. It is recommended that
               FEATOL(j) be set to a value equal to the largest acceptable
               violation for constraint j. For example, if the data
               defining the constraints are of order unity and are correct
               to about 6 decimal digits, it would be appropriate to choose
                              -6
               FEATOL(j) as 10   for all relevant j. Often the square root
               of the machine precision is a reasonable choice if the
               constraint is well scaled.
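
                For example (a sketch only; TOL is a hypothetical DOUBLE
                PRECISION scalar), every element of FEATOL could be set
                to the tolerance 1.0D-6 suggested above by:

                       TOL = 1.0D-6
                       DO 20 J = 1, NCTOTL
                          FEATOL(J) = TOL
                    20 CONTINUE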

          15:  HESS(NROWH,NCOLH) -- DOUBLE PRECISION array            Input
               On entry: HESS may be used to store the Hessian matrix H of
               equation (1) if desired. HESS is accessed only by the
               subroutine QPHESS and is not accessed if LP = .TRUE.. Refer
               to the specification of QPHESS (below) for further details
               of how HESS may be used to pass data to QPHESS.

          16:  QPHESS -- SUBROUTINE, supplied by the user.
                                                         External Procedure
               QPHESS must define the product of the Hessian matrix H and a
               vector x. The elements of H need not be defined explicitly.
               QPHESS is not accessed if LP is set to .TRUE. and in this
               case QPHESS may be the dummy routine E04NAN. (E04NAN is
               included in the NAG Foundation Library and so need not be
               supplied by the user. Its name may be implementation-
               dependent: see the Users' Note for your implementation for
               details.)

               Its specification is:

                      SUBROUTINE QPHESS (N, NROWH, NCOLH, JTHCOL,
                     1                   HESS, X, HX)
                      INTEGER          N, NROWH, NCOLH, JTHCOL
                      DOUBLE PRECISION HESS(NROWH,NCOLH), X(N), HX(N)

                1:  N -- INTEGER                                      Input
                    On entry: the number n of variables.

                2:  NROWH -- INTEGER                                  Input
                    On entry: the row dimension of the array HESS.

                3:  NCOLH -- INTEGER                                  Input
                    On entry: the column dimension of the array HESS.

                4:  JTHCOL -- INTEGER                                 Input
                    The input parameter JTHCOL is included to allow
                    flexibility for the user in the special situation when
                     x is the jth co-ordinate vector (i.e., the jth column of
                    the identity matrix). This may be of interest because
                    the product Hx is then the jth column of H, which can
                    sometimes be computed very efficiently. The user may
                    code QPHESS to take advantage of this case. On entry:
                    if JTHCOL = j, where j>0, HX must contain column JTHCOL
                    of H, and hence special code may be included in QPHESS
                    to test JTHCOL if desired. However, special code is not
                    necessary, since the vector x always contains column
                    JTHCOL of the identity matrix whenever QPHESS is called
                    with JTHCOL > 0.

                5:  HESS(NROWH,NCOLH) -- DOUBLE PRECISION array       Input
                    On entry: the Hessian matrix H.

                    In some cases, it may be desirable to use a one-
                    dimensional array to transmit data or workspace to
                    QPHESS; HESS should then be declared with dimension
                    (NROWH) in the (sub)program from which E04NAF is called
                    and the parameter NCOLH must be 1.

                    In other situations, it may be desirable to compute Hx
                    without accessing HESS - for example, if H is sparse or
                    has special structure. (This is illustrated in the
                    subroutine QPHES1 in the example program in Section 9.)
                    The parameters HESS, NROWH and NCOLH may then refer to
                    any convenient array.

                    When MSGLVL = 99, the (possibly undefined) contents of
                    HESS will be printed, except if NROWH and NCOLH are
                    both 1. Also printed are the results of calling QPHESS
                    with JTHCOL = 1,2,...,n.

                6:  X(N) -- DOUBLE PRECISION array                    Input
                    On entry: the vector x.

                7:  HX(N) -- DOUBLE PRECISION array                  Output
                    On exit: HX must contain the product Hx.
               QPHESS must be declared as EXTERNAL in the (sub)program
               from which E04NAF is called. Parameters denoted as
               Input must not be changed by this procedure.
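
                As an illustration only (a sketch, not part of the
                Library), a minimal QPHESS for the case in which the
                full n by n Hessian is supplied in HESS might simply
                form HX = Hx by dense matrix-vector multiplication,
                ignoring JTHCOL:

                       SUBROUTINE QPHESS (N, NROWH, NCOLH, JTHCOL,
                      1                   HESS, X, HX)
                       INTEGER          N, NROWH, NCOLH, JTHCOL
                       DOUBLE PRECISION HESS(NROWH,NCOLH), X(N), HX(N)
                       INTEGER          I, J
                C      Accumulate HX = H*X one column of HESS at a time
                       DO 40 I = 1, N
                          HX(I) = 0.0D0
                    40 CONTINUE
                       DO 80 J = 1, N
                          DO 60 I = 1, N
                             HX(I) = HX(I) + HESS(I,J)*X(J)
                    60    CONTINUE
                    80 CONTINUE
                       RETURN
                       END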

          17:  COLD -- LOGICAL                                        Input
               On entry: COLD must indicate whether the user has specified
               an initial estimate of the active set of constraints. If
               COLD is set to .TRUE., the initial working set is determined
               by E04NAF. If COLD is set to .FALSE. (a 'warm start'), the
               user must define the ISTATE array which gives the status of
               each constraint with respect to the working set. E04NAF will
               override the user's specification of ISTATE if necessary, so
               that a poor choice of working set will not cause a fatal
               error.

               The warm start option is particularly useful when E04NAF is
               called repeatedly to solve related problems.

          18:  LP -- LOGICAL                                          Input
               On entry: if LP = .FALSE., E04NAF will solve the specified
               quadratic programming problem. If LP = .TRUE., E04NAF will
               treat H as zero and solve the resulting linear programming
               problem; in this case, the parameters HESS and QPHESS will
               not be referenced.

          19:  ORTHOG -- LOGICAL                                      Input
               On entry: ORTHOG must indicate whether orthogonal
               transformations are to be used in computing and updating the
               TQ factorization of the working set
                                        A Q=(0 T),
                                         s
               where A  is a sub-matrix of A and T is reverse-triangular.
                      s
               If ORTHOG = .TRUE., the TQ factorization is computed using
               Householder reflections and plane rotations, and the matrix
               Q is orthogonal. If ORTHOG = .FALSE., stabilized elementary
               transformations are used to maintain the factorization, and
               Q is not orthogonal. A rule of thumb in making the choice is
               that orthogonal transformations require more work, but
               provide greater numerical stability. Thus, we recommend
               setting ORTHOG to .TRUE. if the problem is reasonably small
               or the active set is ill-conditioned. Otherwise, setting
               ORTHOG to .FALSE. will often lead to a reduction in solution
               time with negligible loss of reliability.

          20:  X(N) -- DOUBLE PRECISION array                  Input/Output
               On entry: an estimate of the solution. In the absence of
               better information all elements of X may be set to zero. On
               exit: from E04NAF, X contains the best estimate of the
               solution.

          21:  ISTATE(NCTOTL) -- INTEGER array                 Input/Output
               On entry: with COLD as .FALSE., ISTATE must indicate the
               status of every constraint with respect to the working set.
               The ordering of ISTATE is as follows; the first n elements
               of ISTATE refer to the upper and lower bounds on the
               variables and elements n+1 through n + NCLIN refer to the
               upper and lower bounds on Ax. The significance of each
               possible value of ISTATE(j) is as follows:
               ISTATE(j) Meaning

               -2        The constraint violates its lower bound by more
                         than FEATOL(j). This value of ISTATE cannot occur
                         after a feasible point has been found.

               -1        The constraint violates its upper bound by more
                         than FEATOL(j). This value of ISTATE cannot occur
                         after a feasible point has been found.

               0         The constraint is not in the working set. Usually,
                         this means that the constraint lies strictly
                         between its bounds.

               1         This inequality constraint is included in the
                         working set at its lower bound. The value of the
                         constraint is within FEATOL(j) of its lower bound.

               2         This inequality constraint is included in the
                         working set at its upper bound. The value of the
                         constraint is within FEATOL(j) of its upper bound.

               3         The constraint is included in the working set as
                         an equality. This value of ISTATE can occur only
                         when BL(j) = BU(j). The corresponding constraint
                         is within FEATOL(j) of its required value.
               If COLD = .TRUE., ISTATE need not be set by the user.
               However, when COLD = .FALSE., every element of ISTATE must
               be set to one of the values given above to define a
               suggested initial working set (which will be changed by
               E04NAF if necessary). The most likely values are:
               ISTATE(j) Meaning

               0         The corresponding constraint should not be in the
                         initial working set.

               1         The constraint should be in the initial working
                         set at its lower bound.

               2         The constraint should be in the initial working
                         set at its upper bound.

               3         The constraint should be in the initial working
                         set as an equality. This value must not be
                         specified unless BL(j) = BU(j). The values 1, 2 or
                         3 all have the same effect when BL(j) = BU(j).
               Note that if E04NAF has been called previously with the same
               values of N and NCLIN, ISTATE already contains satisfactory
               values. On exit: when E04NAF exits with IFAIL set to 0, 1 or
               3, the values in the array ISTATE indicate the status of the
               constraints in the active set at the solution. Otherwise,
               ISTATE indicates the composition of the working set at the
               final iterate.
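
                As an illustration of a warm start (a sketch only), to
                suggest an initial working set in which variable 1 is
                held on its lower bound and no other constraint is
                included, one could set:

                       COLD = .FALSE.
                       DO 20 J = 1, NCTOTL
                          ISTATE(J) = 0
                    20 CONTINUE
                       ISTATE(1) = 1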

          22:  ITER -- INTEGER                                       Output
               On exit: the number of iterations performed in either the LP
               phase or the QP phase, whichever was last entered.

               Note that ITER is reset to zero after the LP phase.

          23:  OBJ -- DOUBLE PRECISION                               Output
               On exit: the value of the quadratic objective function at x
               if x is feasible (IFAIL <= 5), or the sum of infeasibilities
               at x otherwise (6 <= IFAIL <= 8).

          24:  CLAMDA(NCTOTL) -- DOUBLE PRECISION array              Output
               On exit: the values of the Lagrange multiplier for each
               constraint with respect to the current working set. The
               ordering of CLAMDA is as follows; the first n components
               contain the multipliers for the bound constraints on the
               variables, and the remaining components contain the
               multipliers for the general linear constraints. If ISTATE(j)
                = 0 (i.e., constraint j is not in the working set), CLAMDA(j)
               is zero. If x is optimal and ISTATE(j) = 1, CLAMDA(j) should
               be non-negative; if ISTATE(j) = 2, CLAMDA(j) should be non-
               positive.

          25:  IWORK(LIWORK) -- INTEGER array                     Workspace

          26:  LIWORK -- INTEGER                                      Input
               On entry:
               the dimension of the array IWORK as declared in the
               (sub)program from which E04NAF is called.
               Constraint: LIWORK>=2*N.

          27:  WORK(LWORK) -- DOUBLE PRECISION array              Workspace

          28:  LWORK -- INTEGER                                       Input
               On entry:
               the dimension of the array WORK as declared in the
               (sub)program from which E04NAF is called.
                Constraints:
                         if LP = .FALSE. or NCLIN >= N then
                                   2
                         LWORK>=2*N +4*N*NCLIN+NROWA.

                        if LP = .TRUE. and NCLIN < N then
                                          2
                        LWORK>=2*(NCLIN+1) +4*N+2*NCLIN+NROWA.
               If MSGLVL > 0, the amount of workspace provided and the
               amount of workspace required are output on the current
               advisory message unit (as defined by X04ABF). As an
               alternative to computing LWORK from the formula given above,
               the user may prefer to obtain an appropriate value from the
               output of a preliminary run with a positive value of MSGLVL
               and LWORK set to 1 (E04NAF will then terminate with IFAIL =
               9).
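
                For example (a sketch only), in the QP case (LP =
                .FALSE.) the workspace lengths could be set directly
                from the constraints given here and under parameter 26:

                       LIWORK = 2*N
                       LWORK  = 2*N*N + 4*N*NCLIN + NROWA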

          29:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. Users who are
               unfamiliar with this parameter should refer to the Essential
               Introduction for details.

               On exit: IFAIL = 0 unless the routine detects an error or
               gives a warning (see Section 6).

               For this routine, because the values of output parameters
               may be useful even if IFAIL /=0 on exit, users are
               recommended to set IFAIL to -1 before entry. It is then
               essential to test the value of IFAIL on exit. To suppress
               the output of an error message when soft failure occurs, set
               IFAIL to 1.

                IFAIL contains zero on exit if x is a strong local minimum,
                i.e., the projected gradient is negligible, the Lagrange
               multipliers are optimal, and the projected Hessian is
               positive-definite. In some cases, a zero value of IFAIL
               means that x is a global minimum (e.g. when the Hessian
               matrix is positive-definite).
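
                For example (a sketch only, assuming the parameter order
                of the routine specification in Section 2, and with NOUT
                denoting a suitable output unit number), a soft-failure
                call might take the form:

                       IFAIL = -1
                       CALL E04NAF (ITMAX, MSGLVL, N, NCLIN, NCTOTL,
                      1             NROWA, NROWH, NCOLH, BIGBND, A, BL,
                      2             BU, CVEC, FEATOL, HESS, QPHESS,
                      3             COLD, LP, ORTHOG, X, ISTATE, ITER,
                      4             OBJ, CLAMDA, IWORK, LIWORK, WORK,
                      5             LWORK, IFAIL)
                       IF (IFAIL.NE.0) THEN
                C         Soft failure: X, ISTATE, OBJ and CLAMDA may
                C         still hold useful information (see Section 6)
                          WRITE (NOUT,*) ' E04NAF returned IFAIL =', IFAIL
                       END IF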

          5.1. Description of the Printed Output

          When MSGLVL >= 5, a line of output is produced for every change
          in the working set (thus, several lines may be printed during a
          single iteration).

          To aid interpretation of the printed results, we mention the
          convention for numbering the constraints: indices 1 through to n
          refer to the bounds on the variables, and when NCLIN > 0 indices
          n+1 through to n + NCLIN refer to the general constraints. When
          the status of a constraint changes, the index of the constraint
          is printed, along with the designation L (lower bound), U (upper
          bound) or E (equality).

          In the LP phase, the printout includes the following:

          ITN            is the iteration count.

          JDEL           is the index of the constraint deleted from the
                         working set. If JDEL is zero, no constraint was
                         deleted.

          JADD           is the index of the constraint added to the
                         working set. If JADD is zero, no constraint was
                         added.

          STEP           is the step taken along the computed search
                         direction.

          COND T         is a lower bound on the condition number of the
                         matrix of predicted active constraints.

          NUMINF         is the number of violated constraints
                         (infeasibilities).

          SUMINF         is a weighted sum of the magnitudes of the
                         constraint violations.

                                                                        T
          LPOBJ          is the value of the linear objective function c x.
                         It is printed only if LP = .TRUE..

          During the QP phase, the printout includes the following:

          ITN            is the iteration count (reset to zero after the LP
                         phase).

          JDEL           is the index of the constraint deleted from the
                         working set. If JDEL is zero, no constraint was
                         deleted.

          JADD           is the index of the constraint added to the
                         working set. If JADD is zero, no constraint was
                         added.

          STEP           is the step (alpha)  taken along the direction of
                                            k
                         search (if STEP is 1.0, the current point is a
                         minimum in the subspace defined by the current
                         working set).

          NHESS          is the number of calls to subroutine QPHESS.

          OBJECTIVE      is the value of the quadratic objective function.

          NCOLZ          is the number of columns of Z (see Section 3). In
                         general, it is the dimension of the subspace in
                         which the quadratic is currently being minimized.

          NORM GFREE     is the Euclidean norm of the gradient of the
                         objective function with respect to the free
                         variables, i.e. variables not currently held at a
                          bound (NORM GFREE is not printed if ORTHOG =
                          .FALSE.). In some cases, the objective function and
                         gradient are updated rather than recomputed. If
                         so, this entry will be -- to indicate that the
                         gradient with respect to the free variables has
                         not been computed.

          NORM QTG       is a weighted norm of the gradient of the
                         objective function with respect to the free
                          variables (NORM QTG is not printed if ORTHOG =
                          .TRUE.). In some cases, the objective function and
                         gradient are updated rather than recomputed. If
                         so, this entry will be -- to indicate that the
                         gradient with respect to the free variables has
                         not been computed.

          NORM ZTG       is the Euclidean norm of the projected gradient
                         (see Section 3).

          COND T         is a lower bound on the condition number of the
                         matrix of constraints in the working set.

          COND ZHZ       is a lower bound on the condition number of the
                         projected Hessian matrix.

          HESS MOD       is the correction added to the diagonal of the
                         projected Hessian to ensure that a satisfactory
                         Cholesky factorization exists (see Section 3).
                         When the projected Hessian is sufficiently
                         positive-definite, HESS MOD will be zero.

          When MSGLVL = 1 or MSGLVL >= 10, the summary printout at the end
          of execution of E04NAF includes a listing of the status of every
          constraint. Note that default names are assigned to all variables
          and constraints.

          The following describes the printout for each variable.

          VARBL          is the name (V) and index j, j=1,2,...,n, of the
                         variable.

          STATE          gives the state of the variable (FR if neither
                         bound is in the working set, EQ if a fixed
                         variable, LL if on its lower bound, UL if on its
                         upper bound, TB if held on a temporary bound). If
                         VALUE lies outside the upper or lower bounds by
                         more than FEATOL(j), STATE will be ++ or --
                         respectively.

          VALUE          is the value of the variable at the final
                         iteration.

          LOWER BOUND    is the lower bound specified for the variable.

          UPPER BOUND    is the upper bound specified for the variable.

          LAGR MULT      is the value of the Lagrange multiplier for the
                         associated bound constraint. This will be zero if
                         STATE is FR. If x is optimal and STATE is LL, the
                         multiplier should be non-negative; if STATE is UL,
                         the multiplier should be non-positive.

          RESIDUAL       is the difference between the variable and the
                         nearer of its bounds BL(j) and BU(j).

           For each of the general constraints the printout is as above,
           with VALUE referring to the jth element of Ax, except that
           VARBL is replaced by

           LNCON          is the name (L) and index j, j=1,2,...,NCLIN, of
                          the constraint.

          6. Error Indicators and Warnings

          Errors or warnings specified by the routine:

          IFAIL= 1
               x is a weak local minimum (the projected gradient is
               negligible, the Lagrange multipliers are optimal, but the
               projected Hessian is only semi-definite). This means that
               the solution is not unique.

          IFAIL= 2
               The solution appears to be unbounded, i.e., the quadratic
               function is unbounded below in the feasible region. This
               value of IFAIL occurs when a step of infinity would have to
               be taken in order to continue the algorithm.

          IFAIL= 3
               x appears to be a local minimum, but optimality cannot be
               verified because some of the Lagrange multipliers are very
               small in magnitude.

               E04NAF has probably found a solution. However, the presence
               of very small Lagrange multipliers means that the predicted
               active set may be incorrect, or that x may be only a
               constrained stationary point rather than a local minimum.
               The method in E04NAF is not guaranteed to find the correct
               active set when there are very small multipliers. E04NAF
               attempts to delete constraints with zero multipliers, but
               this does not necessarily resolve the issue. The
               determination of the correct active set is a combinatorial
               problem that may require an extremely large amount of time.
               The occurrence of small multipliers often (but not always)
               indicates that there are redundant constraints.

          IFAIL= 4
               The iterates of the QP phase could be cycling, since a total
               of 50 changes were made to the working set without altering
               x.

               This value will occur if 50 iterations are performed in the
               QP phase without changing x. The user should check the
               printed output for a repeated pattern of constraint
               deletions and additions. If a sequence of constraint changes
               is being repeated, the iterates are probably cycling.
               (E04NAF  does not contain a method that is guaranteed to
               avoid cycling, which would be combinatorial in nature.)
               Cycling may occur in two circumstances: at a constrained
               stationary point where there are some small or zero Lagrange
               multipliers (see the discussion of IFAIL = 3); or at a point
               (usually a vertex) where the constraints that are satisfied
               exactly are nearly linearly dependent. In the latter case,
               the user has the option of identifying the offending
               dependent constraints and removing them from the problem, or
               restarting the run with larger values of FEATOL for nearly
               dependent constraints. If E04NAF terminates with IFAIL = 4,
               but no suspicious pattern of constraint changes can be
               observed, it may be worthwhile to restart with the final x
               (with or without the warm start option).

          IFAIL= 5
               The limit of ITMAX iterations was reached in the QP phase
               before normal termination occurred.

               The value of ITMAX may be too small. If the method appears
               to be making progress (e.g. the objective function is being
               satisfactorily reduced), increase ITMAX and rerun E04NAF
               (possibly using the warm start facility to specify the
               initial working set). If ITMAX is already large, but some of
               the constraints could be nearly linearly dependent, check
               the output for a repeated pattern of constraints entering
               and leaving the working set. (Near-dependencies are often
               indicated by wide variations in size in the diagonal
               elements of the T matrix, which will be printed if MSGLVL >=
               30.) In this case, the algorithm could be cycling (see the
               comments for IFAIL = 4).

          IFAIL= 6
               The LP phase terminated without finding a feasible point,
               and hence it is not possible to satisfy all the constraints
               to within the tolerances specified by the FEATOL array. In
                this case, the final iterate will reveal the tolerance
                values for which a feasible point would exist (e.g. a
                feasible point will
               exist if the feasibility tolerance for each violated
               constraint exceeds its RESIDUAL at the final point). The
               modified problem (with altered values in FEATOL) may then be
               solved using a warm start.

               The user should check that there are no constraint
               redundancies. If the data for the jth constraint are
               accurate only to the absolute precision (delta), the user
               should ensure that the value of FEATOL(j) is greater than
               (delta). For example, if all elements of A are of order
               unity and are accurate only to three decimal places, every
                                                        -3
               component of FEATOL should be at least 10  .

          IFAIL= 7
               The iterates may be cycling during the LP phase; see the
               comments above under IFAIL = 4.

          IFAIL= 8
               The limit of ITMAX iterations was reached during the LP
               phase. See comments above under IFAIL = 5.

          IFAIL= 9
               An input parameter is invalid.

          Overflow
               If the printed output before the overflow error contains a
               warning about serious ill-conditioning in the working set
               when adding the jth constraint, it may be possible to avoid
               the difficulty by increasing the magnitude of FEATOL(j) and
               rerunning the program. If the message recurs even after this
               change, the offending linearly dependent constraint (with
               index j) must be removed from the problem. If a warning
               message did not precede the fatal overflow, the user should
               contact NAG.

          7. Accuracy

          The routine implements a numerically stable active set strategy
          and returns solutions that are as accurate as the condition of
          the QP problem warrants on the machine.

          8. Further Comments

          The number of iterations depends upon factors such as the number
          of variables and the distances of the starting point from the
          solution. The number of operations performed per iteration is
                                         2
          roughly proportional to (NFREE) , where NFREE (NFREE<=n) is the
           number of variables not fixed on their upper or lower bounds.

          Sensible scaling of the problem is likely to reduce the number of
          iterations required and make the problem less sensitive to
          perturbations in the data, thus improving the condition of the QP
          problem. See the Chapter Introduction and Gill et al [1] for
          further information and advice.

          9. Example

                                    T   1 T
          To minimize the function c x+ -x Hx, where
                                        2

                                                             T
                      c=[-0.02,-0.2,-0.2,-0.2,-0.2,0.04,0.04]

                                   [2 0 0 0 0  0  0]
                                   [0 2 0 0 0  0  0]
                                   [0 0 2 2 0  0  0]
                                 H=[0 0 2 2 0  0  0]
                                   [0 0 0 0 2  0  0]
                                   [0 0 0 0 0 -2 -2]
                                   [0 0 0 0 0 -2 -2]

          subject to the bounds

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04ucf}{NAG On-line Documentation: e04ucf}
\beginscroll
\begin{verbatim}

   
   
   
   E04UCF(3NAG)                 E04UCF                  E04UCF(3NAG)
   
   
   
        E04 -- Minimizing or Maximizing a Function                 E04UCF
                E04UCF -- NAG Foundation Library Routine Document
   
        Note: Before using this routine, please read the Users' Note for
        your implementation to check implementation-dependent details.
        The symbol (*) after a NAG routine name denotes a routine that is
        not included in the Foundation Library.
   
        Note for users via the AXIOM system: the interface to this routine
        has been enhanced for use with AXIOM and is slightly different to
        that offered in the standard version of the Foundation Library.  In
        particular, the optional parameters of the NAG routine are now
        included in the parameter list.  These are described in section
        5.1.2, below.
   
        1. Purpose
   
        E04UCF is designed to minimize an arbitrary smooth function
        subject to constraints, which may include simple bounds on the
        variables, linear constraints and smooth nonlinear constraints.
        (E04UCF  may be used for unconstrained, bound-constrained and
        linearly constrained optimization.) The user must provide
        subroutines that define the objective and constraint functions
        and as many of their first partial derivatives as possible.
        Unspecified derivatives are approximated by finite differences.
        All matrices are treated as dense, and hence E04UCF is not
        intended for large sparse problems.
   
        E04UCF uses a sequential quadratic programming (SQP) algorithm in
        which the search direction is the solution of a quadratic
        programming (QP) problem. The algorithm treats bounds, linear
        constraints and nonlinear constraints separately.
   
        2. Specification
   
               SUBROUTINE E04UCF (N, NCLIN, NCNLN, NROWA, NROWJ, NROWR,
              1                   A, BL, BU, CONFUN, OBJFUN, ITER,
              2                   ISTATE, C, CJAC, CLAMDA, OBJF, OBJGRD,
              3                   R, X, IWORK, LIWORK, WORK, LWORK,
              4                   IUSER, USER, STA, CRA, DER, FEA, FUN,
              5                   HES, INFB, INFS, LINF, LINT, LIST,
              6                   MAJI, MAJP, MINI, MINP, MON, NONF,
              7                   OPT, STE, STAO, STAC, STOO, STOC, VE,
              8                   IFAIL)
               INTEGER          N, NCLIN, NCNLN, NROWA, NROWJ, NROWR,
              1                 ITER, ISTATE(N+NCLIN+NCNLN), IWORK(LIWORK)
              2                 , LIWORK, LWORK, IUSER(*), DER, MAJI,
              3                 MAJP, MINI, MINP, MON, STAO, STAC, STOO,
              4                 STOC, VE, IFAIL
               DOUBLE PRECISION A(NROWA,*), BL(N+NCLIN+NCNLN), BU
              1                 (N+NCLIN+NCNLN), C(*), CJAC(NROWJ,*),
              2                 CLAMDA(N+NCLIN+NCNLN), OBJF, OBJGRD(N), R
              3                 (NROWR,N), X(N), WORK(LWORK), USER(*),
              4                 CRA, FEA, FUN,  INFB, INFS, LINF, LINT,
              5                 NONF, OPT, STE
               LOGICAL          LIST, STA, HES
               EXTERNAL         CONFUN, OBJFUN
   
        3. Description
   
        E04UCF is designed to solve the nonlinear programming problem --
        the minimization of a smooth nonlinear function subject to a set
        of constraints on the variables. The problem is assumed to be
        stated in the following form:
   
                                                   { x  }
               Minimize      F(x)   subject to  l<={A x }<=u,         (1)
                        n                          { L  }
               x is in R                           {c(x)}
   
        where F(x), the objective function, is a nonlinear function, A
                                                                      L
        is an n  by n constant matrix, and c(x) is an n  element vector
               L                                       N
        of nonlinear constraint functions. (The matrix A  and the vector
                                                        L
        c(x) may be empty.) The objective function and the constraint
        functions are assumed to be smooth, i.e., at least twice-
        continuously differentiable. (The method of E04UCF will usually
        solve (1) if there are only isolated discontinuities away from
        the solution.)
   
        This routine is essentially identical to the subroutine SOL/NPSOL
        described in Gill et al [8].
   
        Note that upper and lower bounds are specified for all the
        variables and for all the constraints.
   
        An equality constraint can be specified by setting l =u . If
                                                            i  i
        certain bounds are not present, the associated elements of l or u
        can be set to special values that will be treated as -infty or
        +infty.
   
        If there are no nonlinear constraints in (1) and F is linear or
        quadratic then one of E04MBF, E04NAF or E04NCF(*) will generally
        be more efficient. If the problem is large and sparse the MINOS
        package (see Murtagh and Saunders [13]) should be used, since
        E04UCF treats all matrices as dense.
   
        The user must supply an initial estimate of the solution to (1),
        together with subroutines that define F(x), c(x) and as many
        first partial derivatives as possible; unspecified derivatives
        are approximated by finite differences.
   
        The objective function is defined by subroutine OBJFUN, and the
        nonlinear constraints are defined by subroutine CONFUN. On every
        call, these subroutines must return appropriate values of the
        objective and nonlinear constraints. The user should also provide
        the available partial derivatives. Any unspecified derivatives
        are approximated by finite differences; see Section 5.1 for a
        discussion of the optional parameter Derivative Level. Just
        before either OBJFUN or CONFUN is called, each element of the
        current gradient array OBJGRD or CJAC is initialised to a special
        value. On exit, any element that retains the value is estimated
        by finite differences. Note that if there are nonlinear
         constraints, then the first call to CONFUN will precede the first
        call to OBJFUN.
   
        For maximum reliability, it is preferable for the user to provide
        all partial derivatives (see Chapter 8 of Gill et al [10], for a
        detailed discussion). If all gradients cannot be provided, it is
        similarly advisable to provide as many as possible. While
        developing the subroutines OBJFUN and CONFUN, the optional
        parameter Verify (see Section 5.1) should be used to check the
        calculation of any known gradients.
   
        E04UCF implements a sequential quadratic programming (SQP)
        method. The document for E04NCF(*) should be consulted in
        conjunction with this document.
   
        In the rest of this section we briefly summarize the main
        features of the method of E04UCF. Where possible, explicit
        reference is made to the names of variables that are parameters
        of subroutines E04UCF or appear in the printed output (see
        Section 5.2).
   
        At a solution of (1), some of the constraints will be active,
        i.e., satisfied exactly. An active simple bound constraint
        implies that the corresponding variable is fixed at its bound,
        and hence the variables are partitioned into fixed and free
        variables. Let C denote the m by n matrix of gradients of the
        active general linear and nonlinear constraints. The number of
        fixed variables will be denoted by n  , with n   (n  =n-n  ) the
                                            FX        FR   FR    FX
        number of free variables. The subscripts 'FX' and 'FR' on a
        vector or matrix will denote the vector or matrix composed of the
        components corresponding to fixed or free variables.
   
        A point x is a first-order Kuhn-Tucker point for (1) (see, e.g.,
        Powell [14]) if the following conditions hold:
   
             (i) x is feasible;
   
             (ii) there exist vectors (xi) and (lambda) (the Lagrange
             multiplier vectors for the bound and general constraints)
             such that
                                  T
                               g=C (lambda)+(xi),                    (2)
             where g is the gradient of F evaluated at x, and (xi) =0 if
                                                                  j
             the jth variable is free.
   
             (iii) The Lagrange multiplier corresponding to an
             inequality constraint active at its lower bound must be
             non-negative, and non-positive for an inequality constraint
             active at its upper bound.
   
        Let Z denote a matrix whose columns form a basis for the set of
        vectors orthogonal to the rows of C  ; i.e., C  Z=0. An
                                           FR         FR
        equivalent statement of the condition (2) in terms of Z is
   
                                     T
                                    Z g  =0.
                                       FR
   
                    T
        The vector Z g   is termed the projected gradient of F at x.
                      FR
        Certain additional conditions must be satisfied in order for a
        first-order Kuhn-Tucker point to be a solution of (1) (see, e.g.,
        Powell [14]).
   
        The method of E04UCF is a sequential quadratic programming (SQP)
        method. For an overview of SQP methods, see, for example,
        Fletcher [5], Gill et al [10] and Powell [15].
   
        The basic structure of E04UCF involves major and minor
        iterations. The major iterations generate a sequence of iterates
                               *
        {x } that converge to x , a first-order Kuhn-Tucker point of (1).
          k
                                                      _
        At a typical major iteration, the new iterate x is defined by
   
                                _
                                x=x+(alpha)p                          (3)
   
        where x is the current iterate, the non-negative scalar (alpha)
        is the step length, and p is the search direction. (For
        simplicity, we shall always consider a typical iteration and
        avoid reference to the index of the iteration.) Also associated
        with each major iteration are estimates of the Lagrange
        multipliers and a prediction of the active set.
   
        The search direction p in (3) is the solution of a quadratic
        programming subproblem of the form
   
                         T   1 T                  _  { p }  _
              Minimize  g p+ -p Hp,   subject to  l<={A p}<=u,        (4)
               p             2                       { L }
                                                     {A p}
                                                     { N }
   
        where g is the gradient of F at x, the matrix H is a positive-
        definite quasi-Newton approximation to the Hessian of the
        Lagrangian function (see Section 8.3), and A  is the Jacobian
                                                    N
        matrix of c evaluated at x. (Finite-difference estimates may be
        used for g and A ; see the optional parameter Derivative Level in
                        N
        Section 5.1.) Let l in (1) be partitioned into three sections:
        l , l  and l , corresponding to the bound, linear and nonlinear
         B   L      N
                                _
        constraints. The vector l in (4) is similarly partitioned, and is
        defined as
   
                       _         _               _
                       l =l -x,  l =l -A x,  and l =l -c,
                        B  B      L  L  L         N  N
   
        where c is the vector of nonlinear constraints evaluated at x.
                   _
        The vector u is defined in an analogous fashion.
   
        The estimated Lagrange multipliers at each major iteration are
        the Lagrange multipliers from the subproblem (4) (and similarly
        for the predicted active set). (The numbers of bounds, general
        linear and nonlinear constraints in the QP active set are the
        quantities Bnd, Lin and Nln in the printed output of E04UCF.) In
        E04UCF, (4) is solved using E04NCF(*). Since solving a quadratic
        program as an iterative procedure, the minor iterations of E04UCF
        are the iterations of E04NCF(*). (More details about solving the
        subproblem are given in Section 8.1.)
   
        Certain matrices associated with the QP subproblem are relevant
        in the major iterations. Let the subscripts 'FX' and 'FR' refer
        to the predicted fixed and free variables, and let C denote the m
        by n matrix of gradients of the general linear and nonlinear
        constraints in the predicted active set. First, we have available
        the TQ factorization of C  :
                                 FR
   
                               C  Q  =(0 T),                          (5)
                                FR FR
   
        where T is a nonsingular m by m reverse-triangular matrix (i.e.,
        t  =0 if i+j<m), and the non-singular n   by n   matrix Q   is
         ij                                    FR     FR         FR
        the product of orthogonal transformations (see Gill et al [6]).
        Second, we have the upper-triangular Cholesky factor R of the
        transformed and re-ordered Hessian matrix
   
                                T       T~
                               R R=H ==Q HQ,                          (6)
                                    Q
   
              ~
        where H is the Hessian H with rows and columns permuted so that
        the free variables are first, and Q is the n by n matrix
   
                                   (Q    )
                                   ( FR  )
                                 Q=(  I  ),                           (7)
                                   (   FX)
   
        with I   the identity matrix of order n  . If the columns of Q
              FX                               FX                     FR
        are partitioned so that
   
                                   Q  =(Z Y),
                                    FR
   
        the n (n ==n  -m) columns of Z form a basis for the null space of
             z  z   FR
                                                                     T
        C  . The matrix Z is used to compute the projected gradient Z g
         FR                                                            FR
        at the current iterate. (The values Nz, Norm Gf and Norm Gz
                                                            T
        printed by E04UCF give n  and the norms of g   and Z g  .)
                                z                   FR        FR
   
        A theoretical characteristic of SQP methods is that the predicted
        active set from the QP subproblem (4) is identical to the correct
                                          *
        active set in a neighbourhood of x . In E04UCF, this feature is
        exploited by using the QP active set from the previous iteration
        as a prediction of the active set for the next QP subproblem,
        which leads in practice to optimality of the subproblems in only
        one iteration as the solution is approached. Separate treatment
        of bound and linear constraints in E04UCF also saves computation
        in factorizing C   and H .
                        FR      Q
   
        Once p has been computed, the major iteration proceeds by
        determining a step length (alpha) that produces a 'sufficient
        decrease' in an augmented Lagrangian merit function (see Section
        8.2). Finally, the approximation to the transformed Hessian
        matrix H  is updated using a modified BFGS quasi-Newton update
                Q
        (see Section 8.3) to incorporate new curvature information
                                       _
        obtained in the move from x to x.
   
        On entry to E04UCF, an iterative procedure from E04NCF(*) is
        executed, starting with the user-provided initial point, to find
        a point that is feasible with respect to the bounds and linear
        constraints (using the tolerance specified by Linear Feasibility
         Tolerance; see Section 5.1). If no feasible point exists for the
        bound and linear constraints, (1) has no solution and E04UCF
        terminates. Otherwise, the problem functions will thereafter be
        evaluated only at points that are feasible with respect to the
        bounds and linear constraints. The only exception involves
        variables whose bounds differ by an amount comparable to the
        finite-difference interval (see the discussion of Difference
        Interval in Section 5.1). In contrast to the bounds and linear
        constraints, it must be emphasised that the nonlinear constraints
        will not generally be satisfied until an optimal point is
        reached.
   
        Facilities are provided to check whether the user-provided
        gradients appear to be correct (see the optional parameter Verify
        in Section 5.1). In general, the check is provided at the first
        point that is feasible with respect to the linear constraints and
        bounds. However, the user may request that the check be performed
        at the initial point.
   
        In summary, the method of E04UCF first determines a point that
        satisfies the bound and linear constraints. Thereafter, each
        iteration includes:
   
        (a)   the solution of a quadratic programming subproblem;
   
        (b)   a linesearch with an augmented Lagrangian merit function;
              and
   
        (c)   a quasi-Newton update of the approximate Hessian of the
              Lagrangian function.
   
        These three procedures are described in more detail in Section 8.
   
        4. References
   
        [1]   Dennis J E Jr and More J J (1977) Quasi-Newton Methods,
              Motivation and Theory. SIAM Review. 19 46--89.
   
        [2]   Dennis J E Jr and Schnabel R B (1981) A New Derivation of
              Symmetric Positive-Definite Secant Updates. Nonlinear
               Programming 4. (ed O L Mangasarian, R R Meyer and S M
               Robinson) Academic Press. 167--199.
   
        [3]   Dennis J E Jr and Schnabel R B (1983) Numerical Methods for
               Unconstrained Optimization and Nonlinear Equations.
              Prentice-Hall.
   
        [4]   Dongarra J J, Du Croz J J, Hammarling S and Hanson R J
              (1985) A Proposal for an Extended set of Fortran Basic
              Linear Algebra Subprograms. SIGNUM Newsletter. 20 (1) 2--18.
   
        [5]   Fletcher R (1981) Practical Methods of Optimization, Vol 2.
              Constrained Optimization. Wiley.
   
        [6]   Gill P E, Murray W, Saunders M A and Wright M H (1984)
              User's Guide for SOL/QPSOL Version 3.2. Report SOL 84-5.
              Department of Operations Research, Stanford University.
   
        [7]   Gill P E, Murray W, Saunders M A and Wright M H (1984)
              Procedures for Optimization Problems with a Mixture of
              Bounds and General Linear Constraints. ACM Trans. Math.
              Softw. 10 282--298.
   
        [8]   Gill P E, Hammarling S, Murray W, Saunders M A and Wright M
              H (1986) User's Guide for LSSOL (Version 1.0). Report SOL
              86-1. Department of Operations Research, Stanford
              University.
   
        [9]   Gill P E, Murray W, Saunders M A and Wright M H (1986) Some
              Theoretical Properties of an Augmented Lagrangian Merit
              Function. Report SOL 86-6R. Department of Operations
              Research, Stanford University.
   
        [10]  Gill P E, Murray W and Wright M H (1981) Practical
              Optimization. Academic Press.
   
        [11]  Hock W and Schittkowski K (1981) Test Examples for Nonlinear
              Programming Codes. Lecture Notes in Economics and
              Mathematical Systems. 187 Springer-Verlag.
   
        [12]  Lawson C L, Hanson R J, Kincaid D R and Krogh F T (1979)
              Basic Linear Algebra Subprograms for Fortran Usage. ACM
              Trans. Math. Softw. 5 308--325.
   
        [13]  Murtagh B A and Saunders M A (1983) MINOS 5.0 User's Guide.
              Report SOL 83-20. Department of Operations Research,
              Stanford University.
   
        [14]  Powell M J D (1974) Introduction to Constrained
              Optimization. Numerical Methods for Constrained
              Optimization. (ed P E Gill and W Murray) Academic Press. 1--
              28.
   
        [15]  Powell M J D (1983) Variable Metric Methods in Constrained
              Optimization. Mathematical Programming: The State of the
              Art. (ed A Bachem, M Groetschel and B Korte) Springer-
              Verlag. 288--311.
   
        5. Parameters
   
         1:  N -- INTEGER                                           Input
             On entry: the number, n, of variables in the problem.
             Constraint: N > 0.
   
         2:  NCLIN -- INTEGER                                       Input
             On entry: the number, n , of general linear constraints in
                                    L
             the problem. Constraint: NCLIN >= 0.
   
         3:  NCNLN -- INTEGER                                       Input
             On entry: the number, n , of nonlinear constraints in the
                                    N
             problem. Constraint: NCNLN >= 0.
   
         4:  NROWA -- INTEGER                                       Input
             On entry:
             the first dimension of the array A as declared in the
             (sub)program from which E04UCF is called.
             Constraint: NROWA >= max(1,NCLIN).
   
         5:  NROWJ -- INTEGER                                       Input
             On entry:
             the first dimension of the array CJAC as declared in the
             (sub)program from which E04UCF is called.
             Constraint: NROWJ >= max(1,NCNLN).
   
         6:  NROWR -- INTEGER                                       Input
             On entry:
             the first dimension of the array R as declared in the
             (sub)program from which E04UCF is called.
             Constraint: NROWR >= N.
   
         7:  A(NROWA,*) -- DOUBLE PRECISION array                   Input
             The second dimension of the array A must be >= N for NCLIN >
             0. On entry: the ith row of the array A must contain the ith
             row of the matrix A  of general linear constraints in (1).
                                L
             That is, the ith row contains the coefficients of the ith
             general linear constraint, for i = 1,2,...,NCLIN.
   
             If NCLIN = 0 then the array A is not referenced.
   
         8:  BL(N+NCLIN+NCNLN) -- DOUBLE PRECISION array            Input
             On entry: the lower bounds for all the constraints, in the
             following order. The first n elements of BL must contain the
             lower bounds on the variables. If NCLIN > 0, the next n
                                                                    L
             elements of BL must contain the lower bounds on the general
             linear constraints. If NCNLN > 0, the next n  elements of BL
                                                         N
             must contain the lower bounds for the nonlinear constraints.
             To specify a non-existent lower bound (i.e., l =-infty), the
                                                           j
             value used must satisfy BL(j)<=-BIGBND, where BIGBND is the
             value of the optional parameter Infinite Bound Size whose
                                10
             default value is 10   (see Section 5.1). To specify the jth
             constraint as an equality, the user must set BL(j) = BU(j) =
              (beta), say, where |(beta)| < BIGBND. Constraint: BL(j) <=
              BU(j), for j=1,2,...,N+NCLIN+NCNLN.
   
         9:  BU(N+NCLIN+NCNLN) -- DOUBLE PRECISION array            Input
             On entry: the upper bounds for all the constraints in the
             following order. The first n elements of BU must contain the
             upper bounds on the variables. If NCLIN > 0, the next n
                                                                    L
             elements of BU must contain the upper bounds on the general
             linear constraints. If NCNLN > 0, the next n  elements of BU
                                                         N
             must contain the upper bounds for the nonlinear constraints.
             To specify a non-existent upper bound (i.e., u =+infty), the
                                                           j
             value used must satisfy BU(j) >= BIGBND, where BIGBND is the
             value of the optional parameter Infinite Bound Size, whose
                                10
             default value is 10   (see Section 5.1). To specify the jth
             constraint as an equality, the user must set BU(j) = BL(j) =
             (beta), say, where |(beta)| < BIGBND. Constraint: BU(j) >=
             BL(j), for j=1,2,...,N+NCLIN+NCNLN.
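
              As an illustration only (the dimensions and bound values
              below are hypothetical, not part of the specification), a
              problem with N = 2, NCLIN = 1 and NCNLN = 1, in which
              X(1) >= 0, X(2) <= 5 with no lower bound, the general
              linear constraint is held as an equality with value 1, and
              the nonlinear constraint is bounded below by zero, might
              set BL and BU as follows:

                     INTEGER          N, NCLIN, NCNLN
                     PARAMETER        (N=2, NCLIN=1, NCNLN=1)
                     DOUBLE PRECISION BIGBND
                     PARAMETER        (BIGBND=1.0D+10)
                     DOUBLE PRECISION BL(N+NCLIN+NCNLN), BU(N+NCLIN+NCNLN)
               C     Simple bounds on the variables
                     BL(1) = 0.0D0
                     BU(1) = BIGBND
                     BL(2) = -BIGBND
                     BU(2) = 5.0D0
               C     General linear constraint held as an equality
                     BL(3) = 1.0D0
                     BU(3) = 1.0D0
               C     Nonlinear constraint bounded below by zero
                     BL(4) = 0.0D0
                     BU(4) = BIGBND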
   
        10:  CONFUN -- SUBROUTINE, supplied by the user.
                                                       External Procedure
             CONFUN must calculate the vector c(x) of nonlinear
             constraint functions and (optionally) its Jacobian for a
             specified n element vector x. If there are no nonlinear
             constraints (NCNLN=0), CONFUN will never be called by E04UCF
             and CONFUN may be the dummy routine E04UDM. (E04UDM is
             included in the NAG Foundation Library and so need not be
             supplied by the user. Its name may be implementation-
             dependent: see the Users' Note for your implementation for
             details.) If there are nonlinear constraints, the first call
             to CONFUN will occur before the first call to OBJFUN.
   
             Its specification is:
   
                    SUBROUTINE CONFUN (MODE, NCNLN, N, NROWJ, NEEDC,
                   1                   X, C, CJAC, NSTATE, IUSER,
                   2                   USER)
                    INTEGER          MODE, NCNLN, N, NROWJ, NEEDC
                   1                 (NCNLN), NSTATE, IUSER(*)
                    DOUBLE PRECISION X(N), C(NCNLN), CJAC(NROWJ,N),
                   1                 USER(*)
   
              1:  MODE -- INTEGER                            Input/Output
                  On entry: MODE indicates the values that must be
                  assigned during each call of CONFUN. MODE will always
                  have the value 2 if all elements of the Jacobian are
                  available, i.e., if Derivative Level is either 2 or 3
                  (see Section 5.1). If some elements of CJAC are
                  unspecified, E04UCF will call CONFUN with MODE = 0, 1,
                  or 2:
   
                  If MODE = 2, only the elements of C corresponding to
                  positive values of NEEDC must be set (and similarly for
                  the available components of the rows of CJAC).
   
                  If MODE = 1, the available components of the rows of
                  CJAC corresponding to positive values in NEEDC must be
                  set. Other rows of CJAC and the array C will be
                  ignored.
   
                  If MODE = 0, the components of C corresponding to
                  positive values in NEEDC must be set. Other components
                  and the array CJAC are ignored. On exit: MODE may be
                  set to a negative value if the user wishes to terminate
                  the solution to the current problem. If MODE is
                  negative on exit from CONFUN then E04UCF will terminate
                  with IFAIL set to MODE.
   
              2:  NCNLN -- INTEGER                                  Input
                  On entry: the number, n , of nonlinear constraints.
                                         N
   
              3:  N -- INTEGER                                      Input
                  On entry: the number, n, of variables.
   
              4:  NROWJ -- INTEGER                                  Input
                  On entry: the first dimension of the array CJAC.
   
              5:  NEEDC(NCNLN) -- INTEGER array                     Input
                  On entry: the indices of the elements of C or CJAC that
                  must be evaluated by CONFUN. If NEEDC(i)>0 then the ith
                  element of C and/or the ith row of CJAC (see parameter
                  MODE above) must be evaluated at x.
   
              6:  X(N) -- DOUBLE PRECISION array                    Input
                  On entry: the vector x of variables at which the
                  constraint functions are to be evaluated.
   
              7:  C(NCNLN) -- DOUBLE PRECISION array               Output
                  On exit: if NEEDC(i)>0 and MODE = 0 or 2, C(i) must
                  contain the value of the ith constraint at x. The
                  remaining components of C, corresponding to the non-
                  positive elements of NEEDC, are ignored.
   
              8:  CJAC(NROWJ,N) -- DOUBLE PRECISION array          Output
                  On exit: if NEEDC(i)>0 and MODE = 1 or 2, the ith row
                  of CJAC must contain the available components of the
                  vector (nabla)c  given by
                                 i
                                      ( ddc   ddc       ddc )
                                      (    i     i         i)T
                            (nabla)c =( ----, ----,..., ----) ,
                                    i ( ddx   ddx       ddx )
                                      (    1     2         n)
                         ddc
                            i
                  where  ---- is the partial derivative of the ith
                         ddx
                            j
                  constraint with respect to the jth variable, evaluated
                  at the point x. See also the parameter NSTATE below.
                  The remaining rows of CJAC, corresponding to non-
                  positive elements of NEEDC, are ignored.
   
                  If all constraint gradients (Jacobian elements) are
                   known (i.e., Derivative Level = 2 or 3; see Section
                   5.1), any constant elements may be assigned to CJAC one
                   time only at the start of the optimization. An element
                  of CJAC that is not subsequently assigned in CONFUN
                  will retain its initial value throughout. Constant
                  elements may be loaded into CJAC either before the call
                  to E04UCF or during the first call to CONFUN (signalled
                  by the value NSTATE = 1). The ability to preload
                  constants is useful when many Jacobian elements are
                  identically zero, in which case CJAC may be initialised
                  to zero and non-zero elements may be reset by CONFUN.
   
                  Note that constant non-zero elements do affect the
                  values of the constraints. Thus, if CJAC(i,j) is set to
                  a constant value, it need not be reset in subsequent
                  calls to CONFUN, but the value CJAC(i,j)*X(j) must
                  nonetheless be added to C(i).
   
                  It must be emphasized that, if Derivative Level < 2,
                  unassigned elements of CJAC are not treated as
                  constant; they are estimated by finite differences, at
                  non-trivial expense. If the user does not supply a
                  value for Difference Interval (see Section 5.1), an
                  interval for each component of x is computed
                  automatically at the start of the optimization. The
                  automatic procedure can usually identify constant
                  elements of CJAC, which are then computed once only by
                  finite differences.
   
              9:  NSTATE -- INTEGER                                 Input
                  On entry: if NSTATE = 1 then E04UCF is calling CONFUN
                  for the first time. This parameter setting allows the
                  user to save computation time if certain data must be
                  read or calculated only once.
   
             10:  IUSER(*) -- INTEGER array                User Workspace
   
             11:  USER(*) -- DOUBLE PRECISION array        User Workspace
                  CONFUN is called from E04UCF with the parameters IUSER
                  and USER as supplied to E04UCF. The user is free to use
                  the arrays IUSER and USER to supply information to
                  CONFUN as an alternative to using COMMON.
             CONFUN must be declared as EXTERNAL in the (sub)program
             from which E04UCF is called. Parameters denoted as
             Input must not be changed by this procedure.
   
        11:  OBJFUN -- SUBROUTINE, supplied by the user.
                                                       External Procedure
             OBJFUN must calculate the objective function F(x) and
             (optionally) the gradient g(x) for a specified n element
             vector x.
   
             Its specification is:
   
                    SUBROUTINE OBJFUN (MODE, N, X, OBJF, OBJGRD,
                   1                   NSTATE, IUSER, USER)
                    INTEGER          MODE, N, NSTATE, IUSER(*)
                    DOUBLE PRECISION X(N), OBJF, OBJGRD(N), USER(*)
   
              1:  MODE -- INTEGER                            Input/Output
                  On entry: MODE indicates the values that must be
                  assigned during each call of OBJFUN.
   
                  MODE will always have the value 2 if all components of
                  the objective gradient are specified by the user, i.e.,
                  if Derivative Level is either 1 or 3. If some gradient
                  elements are unspecified, E04UCF will call OBJFUN with
                  MODE = 0, 1 or 2.
                       If MODE = 2, compute OBJF and the available
                       components of OBJGRD.
   
                       If MODE = 1, compute all available components of
                       OBJGRD; OBJF is not required.
   
                       If MODE = 0, only OBJF needs to be computed;
                       OBJGRD is ignored.
                  On exit: MODE may be set to a negative value if the
                  user wishes to terminate the solution to the current
                  problem. If MODE is negative on exit from OBJFUN, then
                  E04UCF will terminate with IFAIL set to MODE.
   
              2:  N -- INTEGER                                      Input
                  On entry: the number, n, of variables.
   
              3:  X(N) -- DOUBLE PRECISION array                    Input
                  On entry: the vector x of variables at which the
                  objective function is to be evaluated.
   
              4:  OBJF -- DOUBLE PRECISION                         Output
                  On exit: if MODE = 0 or 2, OBJF must be set to the
                  value of the objective function at x.
   
              5:  OBJGRD(N) -- DOUBLE PRECISION array              Output
                  On exit: if MODE = 1 or 2, OBJGRD must return the
                  available components of the gradient evaluated at x.
   
              6:  NSTATE -- INTEGER                                 Input
                  On entry: if NSTATE = 1 then E04UCF is calling OBJFUN
                  for the first time. This parameter setting allows the
                  user to save computation time if certain data must be
                  read or calculated only once.
   
              7:  IUSER(*) -- INTEGER array                User Workspace
   
              8:  USER(*) -- DOUBLE PRECISION array        User Workspace
                  OBJFUN is called from E04UCF with the parameters IUSER
                  and USER as supplied to E04UCF. The user is free to use
                  the arrays IUSER and USER to supply information to
                  OBJFUN as an alternative to using COMMON.
             OBJFUN must be declared as EXTERNAL in the (sub)program
             from which E04UCF is called. Parameters denoted as
             Input must not be changed by this procedure.
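
              As an illustration only (the objective is hypothetical and
              assumes N = 2), an OBJFUN supplying both the function value
              and the complete gradient might take the following form:

                     SUBROUTINE OBJFUN (MODE, N, X, OBJF, OBJGRD,
                    1                   NSTATE, IUSER, USER)
                     INTEGER          MODE, N, NSTATE, IUSER(*)
                     DOUBLE PRECISION X(N), OBJF, OBJGRD(N), USER(*)
               C     Objective value is required when MODE = 0 or 2
                     IF (MODE.EQ.0 .OR. MODE.EQ.2) THEN
                        OBJF = (X(1)-1.0D0)**2 + (X(2)-2.0D0)**2
                     END IF
               C     Gradient is required when MODE = 1 or 2
                     IF (MODE.EQ.1 .OR. MODE.EQ.2) THEN
                        OBJGRD(1) = 2.0D0*(X(1)-1.0D0)
                        OBJGRD(2) = 2.0D0*(X(2)-2.0D0)
                     END IF
                     RETURN
                     END

              With all gradient elements supplied, this sketch corresponds
              to Derivative Level = 1 or 3 (see Section 5.1).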
   
        12:  ITER -- INTEGER                                       Output
             On exit: the number of iterations performed.
   
        13:  ISTATE(N+NCLIN+NCNLN) -- INTEGER array          Input/Output
              On entry: ISTATE need not be initialised if E04UCF is called
              with the (default) Cold Start option. The ordering of ISTATE
             is as follows. The first n elements of ISTATE refer to the
             upper and lower bounds on the variables, elements n+1
             through n+n  refer to the upper and lower bounds on A x, and
                        L                                         L
             elements n+n +1 through n+n +n  refer to the upper and lower
                         L              L  N
             bounds on c(x). When a Warm Start option is chosen, the
             elements of ISTATE corresponding to the bounds and linear
             constraints define the initial working set for the procedure
             that finds a feasible point for the linear constraints and
             bounds. The active set at the conclusion of this procedure
             and the elements of ISTATE corresponding to nonlinear
             constraints then define the initial working set for the
             first QP subproblem. Possible values for ISTATE(j) are:
   
             ISTATE(j) Meaning
   
             0         The corresponding constraint is not in the initial
                       QP working set.
   
             1         This inequality constraint should be in the
                       working set at its lower bound.
   
             2         This inequality constraint should be in the
                       working set at its upper bound.
   
             3         This equality constraint should be in the initial
                       working set. This value must not be specified
                       unless BL(j) = BU(j). The values 1,2 or 3 all have
                       the same effect when BL(j) = BU(j).
             Note that if E04UCF has been called previously with the same
             values of N, NCLIN and NCNLN, ISTATE already contains
             satisfactory values. If necessary, E04UCF will override the
             user's specification of ISTATE so that a poor choice will
             not cause the algorithm to fail. On exit: with IFAIL = 0 or
             1, the values in the array ISTATE correspond to the active
             set of the final QP subproblem, and are a prediction of the
             status of the constraints at the solution of the problem.
             Otherwise, ISTATE indicates the composition of the QP
             working set at the final iterate. The significance of each
             possible value of ISTATE(j) is as follows:
             -2        This constraint violates its lower bound by more
                       than the appropriate feasibility tolerance (see
                        the optional parameters Linear Feasibility
                       Tolerance and Nonlinear Feasibility Tolerance in
                       Section 5.1). This value can occur only when no
                       feasible point can be found for a QP subproblem.
   
             -1        This constraint violates its upper bound by more
                       than the appropriate feasibility tolerance (see
                        the optional parameters Linear Feasibility
                       Tolerance and Nonlinear Feasibility Tolerance in
                       Section 5.1). This value can occur only when no
                       feasible point can be found for a QP subproblem.
   
             0         The constraint is satisfied to within the
                       feasibility tolerance, but is not in the working
                       set.
   
              1         This inequality constraint is included in the QP
                        working set at its lower bound.
   
             2         This inequality constraint is included in the QP
                       working set at its upper bound.
   
             3         This constraint is included in the QP working set
                       as an equality. This value of ISTATE can occur
                       only when BL(j) = BU(j).
   
        14:  C(*) -- DOUBLE PRECISION array                        Output
             Note: the dimension of the array C must be at least
             max(1,NCNLN).
             On exit: if NCNLN > 0, C(i) contains the value of the ith
             nonlinear constraint function c  at the final iterate, for
                                            i
             i=1,2,...,NCNLN. If NCNLN = 0, then the array C is not
             referenced.
   
        15:  CJAC(NROWJ,*) -- DOUBLE PRECISION array         Input/Output
              Note: the second dimension of the array CJAC must be at
              least N for NCNLN > 0 and 1 otherwise. On entry: in general,
             CJAC need not be initialised before the call to E04UCF.
             However, if Derivative Level = 3, the user may optionally
             set the constant elements of CJAC (see parameter NSTATE in
             the description of CONFUN). Such constant elements need not
             be re-assigned on subsequent calls to CONFUN. If NCNLN = 0,
             then the array CJAC is not referenced. On exit: if NCNLN >
             0, CJAC contains the Jacobian matrix of the nonlinear
             constraint functions at the final iterate, i.e., CJAC(i,j)
             contains the partial derivative of the ith constraint
             function with respect to the jth variable, for i=1,2,...,
             NCNLN; j = 1,2,...,N. (See the discussion of parameter CJAC
             under CONFUN.)
   
        16:  CLAMDA(N+NCLIN+NCNLN) -- DOUBLE PRECISION array Input/Output
             On entry: CLAMDA need not be initialised if E04UCF is called
             with the (default) Cold Start option. With the Warm Start
             option, CLAMDA must contain a multiplier estimate for each
             nonlinear constraint with a sign that matches the status of
             the constraint specified by the ISTATE array (as above). The
              ordering of CLAMDA is as follows: the first n elements
             contain the multipliers for the bound constraints on the
             variables, elements n+1 through n+n  contain the multipliers
                                                L
             for the general linear constraints, and elements n+n +1
                                                                 L
             through n+n +n  contain the multipliers for the nonlinear
                        L  N
             constraints. If the jth constraint is defined as 'inactive'
             by the initial value of the ISTATE array, CLAMDA(j) should
             be zero; if the jth constraint is an inequality active at
              its lower bound, CLAMDA(j) should be non-negative; if the
              jth constraint is an inequality active at its upper bound,
              CLAMDA(j) should be non-positive. On exit: the values of the
              QP multipliers from the last QP subproblem. CLAMDA(j) should
              be non-negative if ISTATE(j) = 1 and non-positive if
              ISTATE(j) = 2.
   
        17:  OBJF -- DOUBLE PRECISION                              Output
             On exit: the value of the objective function, F(x), at the
             final iterate.
   
        18:  OBJGRD(N) -- DOUBLE PRECISION array                   Output
             On exit: the gradient (or its finite-difference
             approximation) of the objective function at the final
             iterate.
   
        19:  R(NROWR,N) -- DOUBLE PRECISION array            Input/Output
             On entry: R need not be initialised if E04UCF is called with
             a Cold Start option (the default), and will be taken as the
              identity. With a Warm Start, R must contain the upper-
             triangular Cholesky factor R of the initial approximation of
             the Hessian of the Lagrangian function, with the variables
             in the natural order. Elements not in the upper-triangular
             part of R are assumed to be zero and need not be assigned.
              On exit: if Hessian = No (the default; see Section 5.1), R
                                                                 T~
             contains the upper-triangular Cholesky factor R of Q HQ, an
             estimate of the transformed and re-ordered Hessian of the
             Lagrangian at x (see (6) in Section 3). If Hessian = Yes, R
             contains the upper-triangular Cholesky factor R of H, the
             approximate (untransformed) Hessian of the Lagrangian, with
             the variables in the natural order.
   
        20:  X(N) -- DOUBLE PRECISION array                  Input/Output
             On entry: an initial estimate of the solution. On exit: the
             final estimate of the solution.
   
        21:  IWORK(LIWORK) -- INTEGER array                     Workspace
   
        22:  LIWORK -- INTEGER                                      Input
             On entry:
             the dimension of the array IWORK as declared in the
             (sub)program from which E04UCF is called.
             Constraint: LIWORK>=3*N+NCLIN+2*NCNLN.
   
        23:  WORK(LWORK) -- DOUBLE PRECISION array              Workspace
   
        24:  LWORK -- INTEGER                                       Input
             On entry:
             the dimension of the array WORK as declared in the
             (sub)program from which E04UCF is called.
             Constraints:
             if NCLIN = NCNLN = 0 then
                   LWORK >=20*N
   
             if NCNLN = 0 and NCLIN > 0 then
                              2
                   LWORK >=2*N +20*N+11*NCLIN
   
             if NCNLN > 0 and NCLIN >= 0 then
                             2
                   LWORK>=2*N +N*NCLIN+20*N*NCNLN+20*N+ 11*NCLIN+21*NCNLN
   
             If Major Print Level > 0, the required amounts of workspace
             are output on the current advisory message channel (see
             X04ABF). As an alternative to computing LIWORK and LWORK
             from the formulas given above, the user may prefer to obtain
             appropriate values from the output of a preliminary run with
             a positive value of Major Print Level and LIWORK and LWORK
             set to 1. (E04UCF will then terminate with IFAIL = 9.)
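
              As a sketch only (the problem dimensions below are
              hypothetical), the workspace might be declared in the
              calling (sub)program from the formulas above as follows:

                     INTEGER          N, NCLIN, NCNLN, LIWORK, LWORK
                     PARAMETER        (N=2, NCLIN=1, NCNLN=1)
                     PARAMETER        (LIWORK=3*N+NCLIN+2*NCNLN)
                     PARAMETER        (LWORK=2*N*N+N*NCLIN+20*N*NCNLN+
                    1                 20*N+11*NCLIN+21*NCNLN)
                     INTEGER          IWORK(LIWORK)
                     DOUBLE PRECISION WORK(LWORK)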
   
        25:  IUSER(*) -- INTEGER array                     User Workspace
             Note: the dimension of the array IUSER must be at least 1.
             IUSER is not used by E04UCF, but is passed directly to
             routines CONFUN and OBJFUN and may be used to pass
             information to those routines.
   
        26:  USER(*) -- DOUBLE PRECISION array             User Workspace
             Note: the dimension of the array USER must be at least 1.
             USER is not used by E04UCF, but is passed directly to
             routines CONFUN and OBJFUN and may be used to pass
             information to those routines.
   
        27:  IFAIL -- INTEGER                                Input/Output
             On entry: IFAIL must be set to 0, -1 or 1. Users who are
             unfamiliar with this parameter should refer to the Essential
             Introduction for details.
   
             On exit: IFAIL = 0 unless the routine detects an error or
             gives a warning (see Section 6).
   
             For this routine, because the values of output parameters
             may be useful even if IFAIL /=0 on exit, users are
             recommended to set IFAIL to -1 before entry. It is then
             essential to test the value of IFAIL on exit.
   
             E04UCF returns with IFAIL = 0 if the iterates have
             converged to a point x that satisfies the first-order Kuhn-
             Tucker conditions to the accuracy requested by the optional
             parameter Optimality Tolerance (see Section 5.1), i.e., the
             projected gradient and active constraint residuals are
             negligible at x.
   
             The user should check whether the following four conditions
             are satisfied:
             (i)   the final value of Norm Gz is significantly less than
                   that at the starting point;
   
             (ii)  during the final major iterations, the values of Step
                   and ItQP are both one;
   
             (iii) the last few values of both Norm Gz and Norm C become
                   small at a fast linear rate;
   
             (iv)  Cond Hz is small.
             If all these conditions hold, x is almost certainly a local
             minimum of (1). (See Section 9 for a specific example.)
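
              Bringing the parameter descriptions above together, the
              following sketch (with the remaining declarations and data
              assignments omitted, as indicated by the comments) shows
              the order of the arguments and the recommended setting of
              IFAIL before the call:

                     EXTERNAL         CONFUN, OBJFUN
               C     ... declare and initialise the remaining parameters
               C     as described above, then:
                     IFAIL = -1
                     CALL E04UCF (N, NCLIN, NCNLN, NROWA, NROWJ, NROWR,
                    1             A, BL, BU, CONFUN, OBJFUN, ITER,
                    2             ISTATE, C, CJAC, CLAMDA, OBJF, OBJGRD,
                    3             R, X, IWORK, LIWORK, WORK, LWORK,
                    4             IUSER, USER, IFAIL)
               C     IFAIL should then be tested on exit (see Section 6)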
   
        5.1. Optional Input Parameters
   
        Several optional parameters in E04UCF define choices in the
        behaviour of the routine. In order to reduce the number of formal
        parameters of E04UCF these optional parameters have associated
        default values (see Section 5.1.3) that are appropriate for most
        problems. Therefore the user need only specify those optional
        parameters whose values are to be different from their default
        values.
   
        The remainder of this section can be skipped by users who wish to
         use the default values for all optional parameters. A complete
         list of optional parameters and their default values is given in
         Section 5.1.3.
   
        5.1.1. Specification of the optional parameters
   
        Optional parameters may be specified by calling one, or both, of
        E04UDF and E04UEF prior to a call to E04UCF.
   
        E04UDF reads options from an external options file, with Begin
        and End as the first and last lines respectively and each
        intermediate line defining a single optional parameter. For
        example,
   
              Begin
                Print Level = 1
              End
   
        The call
   
              CALL E04UDF (IOPTNS, INFORM)
   
        can then be used to read the file on unit IOPTNS. INFORM will be
        zero on successful exit. E04UDF should be consulted for a full
        description of this method of supplying optional parameters.
   
        E04UEF can be called directly to supply options, one call being
        necessary for each optional parameter. For example,
   
               CALL E04UEF ('Print Level = 1')
   
        E04UEF should be consulted for a full description of this method
        of supplying optional parameters.
   
        All optional parameters not specified by the user are set to
        their default values. Optional parameters specified by the user
        are unaltered by E04UCF (unless they define invalid values) and
        so remain in effect for subsequent calls to E04UCF, unless
        altered by the user.
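
         For example, the two methods might be combined as follows; the
         unit number, file name and option values here are illustrative
         only:

               INTEGER IOPTNS, INFORM
               IOPTNS = 7
               OPEN (UNIT=IOPTNS, FILE='e04ucf.opt', STATUS='OLD')
               CALL E04UDF (IOPTNS, INFORM)
               CALL E04UEF ('Major Print Level = 5')
               CALL E04UEF ('Derivative Level = 3')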
   
        5.1.2. Description of the optional parameters
   
        The following list (in alphabetical order) gives the valid
        options. For each option, we give the keyword, any essential
        optional qualifiers, the default value, and the definition. The
        minimum valid abbreviation of each keyword is underlined. If no
        characters of an optional qualifier are underlined, the qualifier
        may be omitted. The letter a denotes a phrase (character string)
        that qualifies an option. The letters i and r denote INTEGER and
        DOUBLE PRECISION values required with certain options. The number
        (epsilon) is a generic notation for machine precision (see
        X02AJF(*) ), and (epsilon)  denotes the relative precision of the
                                  R
         objective function (the optional parameter Function Precision; see
        below).
   
        Central Difference Interval r Default values are computed
   
        If the algorithm switches to central differences because the
        forward-difference approximation is not sufficiently accurate,
        the value of r is used as the difference interval for every
        component of x. The use of finite-differences is discussed
        further below under the optional parameter Difference Interval.
   
        Cold Start Default = Cold Start
   
        Warm Start
   
           (AXIOM parameter STA, warm start when .TRUE.)
   
        This option controls the specification of the initial working set
        in both the procedure for finding a feasible point for the linear
        constraints and bounds, and in the first QP subproblem
        thereafter. With a Cold Start, the first working set is chosen by
        E04UCF based on the values of the variables and constraints at
        the initial point. Broadly speaking, the initial working set will
        include equality constraints and bounds or inequality constraints
        that violate or 'nearly' satisfy their bounds (within Crash
        Tolerance; see below). With a Warm Start, the user must set the
        ISTATE array and define CLAMDA and R as discussed in Section 5.
        ISTATE values associated with bounds and linear constraints
        determine the initial working set of the procedure to find a
        feasible point with respect to the bounds and linear constraints.
        ISTATE values associated with nonlinear constraints determine the
        initial working set of the first QP subproblem after such a
        feasible point has been found. E04UCF will override the user's
        specification of ISTATE if necessary, so that a poor choice of
        the working set will not cause a fatal error. A warm start will
        be advantageous if a good estimate of the initial working set is
        available - for example, when E04UCF is called repeatedly to
        solve related problems.
   
        Crash Tolerance r Default = 0.01
   
           (AXIOM parameter CRA)
   
        This value is used in conjunction with the optional parameter
        Cold Start (the default value). When making a cold start, the QP
         algorithm in E04UCF must select an initial working set. When r>=0,
         the initial working set will include (if possible) bounds or
        general inequality constraints that lie within r of their bounds.
                                                 T
        In particular, a constraint of the form a x>=l will be included
                                                 j
                                        T
        in the initial working set if |a x-l|<=r(1+|l|). If r<0 or r>1,
                                        j
        the default value is used.
   
        Defaults
   
        This special keyword may be used to reset the default values
        following a call to E04UCF.
   
        Derivative Level i Default = 3
   
           (AXIOM parameter DER)
   
        This parameter indicates which derivatives are provided by the
        user in subroutines OBJFUN and CONFUN. The possible choices for i
        are the following.
   
            i    Meaning
   
            3    All objective and constraint gradients are provided by
                 the user.
   
            2    All of the Jacobian is provided, but some components of
                 the objective gradient are not specified by the user.
   
            1    All elements of the objective gradient are known, but
                 some elements of the Jacobian matrix are not specified
                 by the user.
   
            0    Some elements of both the objective gradient and the
                 Jacobian matrix are not specified by the user.
   
        The value i=3 should be used whenever possible, since E04UCF is
        more reliable and will usually be more efficient when all
        derivatives are exact.
   
        If i=0 or 2, E04UCF will estimate the unspecified components of
        the objective gradient, using finite differences. The computation
        of finite-difference approximations usually increases the total
        run-time, since a call to OBJFUN is required for each unspecified
        element. Furthermore, less accuracy can be attained in the
        solution (see Chapter 8 of Gill et al [10], for a discussion of
        limiting accuracy).
   
        If i=0 or 1, E04UCF will approximate unspecified elements of the
        Jacobian. One call to CONFUN is needed for each variable for
        which partial derivatives are not available. For example, if the
        Jacobian has the form
   
                                    (* * * *)
                                    (* ? ? *)
                                    (* * ? *)
                                    (* * * *)
   
        where '*' indicates an element provided by the user and '?'
        indicates an unspecified element, E04UCF will call CONFUN twice:
        once to estimate the missing element in column 2, and again to
        estimate the two missing elements in column 3. (Since columns 1
        and 4 are known, they require no calls to CONFUN.)
   
        At times, central differences are used rather than forward
        differences, in which case twice as many calls to OBJFUN and
        CONFUN are needed. (The switch to central differences is not
        under the user's control.)
   
        Difference Interval r Default values are computed
   
           (AXIOM parameter DIF)
   
        This option defines an interval used to estimate gradients by
        finite differences in the following circumstances:
   
        (a)   For verifying the objective and/or constraint gradients
              (see the description of Verify, below).
   
        (b)   For estimating unspecified elements of the objective
               gradient or the Jacobian matrix.
   
        In general, a derivative with respect to the jth variable is
                                                                      ^
        approximated using the interval (delta) , where (delta) =r(1+|x |)
                                               j               j       j
                ^
        with x the first point feasible with respect to the bounds and
        linear constraints. If the functions are well scaled, the
        resulting derivative approximation should be accurate to O(r).
   
        See Gill et al [10] for a discussion of the accuracy in finite-
        difference approximations.
   
        If a difference interval is not specified by the user, a finite-
        difference interval will be computed automatically for each
        variable by a procedure that requires up to six calls of CONFUN
        and OBJFUN for each component. This option is recommended if the
        function is badly scaled or the user wishes to have E04UCF
        determine constant elements in the objective and constraint
        gradients (see the descriptions of CONFUN and OBJFUN in
        Section 5).
   
                                            _________
        Feasibility Tolerance r Default = \/(epsilon)
   
           (AXIOM parameter FEA)
   
        The scalar r defines the maximum acceptable absolute violations
        in linear and nonlinear constraints at a 'feasible' point; i.e.,
        a constraint is considered satisfied if its violation does not
        exceed r. If r<(epsilon) or r>=1, the default value is used.
        Using this keyword sets both optional parameters Linear
        Feasibility Tolerance and Nonlinear Feasibility Tolerance to r,
        if (epsilon)<=r<1. (Additional details are given below under the
        descriptions of these parameters.)
   
                                                0.9
        Function Precision r Default = (epsilon)
   
           (AXIOM parameter FUN)
   
        This parameter defines (epsilon) , which is intended to be a
                                        R
        measure of the accuracy with which the problem functions f and c
        can be computed. If r<(epsilon) or r>=1, the default value is
        used. The value of (epsilon)  should reflect the relative
                                    R
        precision of 1+|F(x)|; i.e., (epsilon)  acts as a relative
                                              R
        precision when |F| is large, and as an absolute precision when
        |F| is small. For example, if F(x) is typically of order 1000 and
        the first six significant digits are known to be correct, an
        appropriate value for (epsilon)  would be 1.0E-6. In contrast, if
                                       R
                                     -4
        F(x) is typically of order 10   and the first six significant
        digits are known to be correct, an appropriate value for
        (epsilon)  would be 1.0E-10. The choice of (epsilon)  can be
                 R                                          R
        quite complicated for badly scaled problems; see Chapter 8 of
        Gill et al [10] for a discussion of scaling techniques. The
        default value is appropriate for most simple functions that are
        computed with full accuracy. However, when the accuracy of the
        computed function values is known to be significantly worse than
        full precision, the value of (epsilon)  should be large enough so
                                              R
        that E04UCF will not attempt to distinguish between function
        values that differ by less than the error inherent in the
        calculation.
   
        Hessian No Default = No
   
        Hessian Yes
   
           (No AXIOM parameter - fixed as Yes)
   
        This option controls the contents of the upper-triangular matrix
        R (see Section 5). E04UCF works exclusively with the transformed
        and re-ordered Hessian H  (6), and hence extra computation is
                                Q
        required to form the Hessian itself. If Hessian = No, R contains
        the Cholesky factor of the transformed and re-ordered Hessian. If
        Hessian = Yes the Cholesky factor of the approximate Hessian
        itself is formed and stored in R. The user should select Hessian
        = Yes if a warm start will be used for the next call to E04UCF.
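
         For instance, when a related problem is to be solved next by a
         warm-started call, one might (illustratively) specify

               CALL E04UEF ('Hessian = Yes')

         before the first call of E04UCF, and

               CALL E04UEF ('Warm Start')

         (with ISTATE, CLAMDA and R set as described in Section 5)
         before the second call.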
   
                                          10
        Infinite Bound Size r Default = 10
   
           (AXIOM parameter INFB)
   
        If r>0, r defines the 'infinite' bound BIGBND in the definition
        of the problem constraints. Any upper bound greater than or equal
        to BIGBND will be regarded as plus infinity (and similarly for a
        lower bound less than or equal to -BIGBND). If r<=0, the default
        value is used.
   
                                                    10
        Infinite Step Size r Default = max(BIGBND,10  )
   
           (AXIOM parameter INFS)
   
        If r>0, r specifies the magnitude of the change in variables that
        is treated as a step to an unbounded solution. If the change in x
        during an iteration would exceed the value of Infinite Step Size,
        the objective function is considered to be unbounded below in the
        feasible region. If r<=0, the default value is used.
   
         Iteration Limit i Default = max(50,3(n+n )+10n )
                                                L     N
   
        See Major Iteration Limit below.
   
                                                    _________
        Linear Feasibility Tolerance r  Default = \/(epsilon)
                                      1
   
           (AXIOM parameter LINF)
   
                                                       _________
        Nonlinear Feasibility Tolerance r  Default = \/(epsilon) if
                                         2
   
                                            0.33
         Derivative Level >= 2 and (epsilon)     otherwise

            (AXIOM parameter NONF)
   
        The scalars r  and r  define the maximum acceptable absolute
                     1      2
        violations in linear and nonlinear constraints at a 'feasible'
        point; i.e., a linear constraint is considered satisfied if its
        violation does not exceed r , and similarly for a nonlinear
                                   1
        constraint and r . If r <(epsilon) or r >=1, the default value is
                        2      i               i
        used, for i=1,2.
   
        On entry to E04UCF, an iterative procedure is executed in order
        to find a point that satisfies the linear constraint and bounds
        on the variables to within the tolerance r . All subsequent
                                                  1
        iterates will satisfy the linear constraints to within the same
        tolerance (unless r  is comparable to the finite-difference
                           1
        interval).
   
        For nonlinear constraints, the feasibility tolerance r  defines
                                                              2
        the largest constraint violation that is acceptable at an optimal
        point. Since nonlinear constraints are generally not satisfied
        until the final iterate, the value of Nonlinear Feasibility
        Tolerance acts as a partial termination criterion for the
        iterative sequence generated by E04UCF (see the discussion of
        Optimality Tolerance).
   
        These tolerances should reflect the precision of the
        corresponding constraints. For example, if the variables and the
        coefficients in the linear constraints are of order unity, and
        the latter are correct to about 6 decimal digits, it would be
                                       - 6
        appropriate to specify r  as 10   .
                                1
   
        Linesearch Tolerance r Default = 0.9
   
           (AXIOM parameter LINT)
   
   
        The value r (0 <= r < 1) controls the accuracy with which the
        step (alpha) taken during each iteration approximates a minimum
        of the merit function along the search direction (the smaller the
        value of r, the more accurate the linesearch). The default value
        r=0.9 requests an inaccurate search, and is appropriate for most
        problems, particularly those with any nonlinear constraints.
   
        If there are no nonlinear constraints, a more accurate search may
        be appropriate when it is desirable to reduce the number of major
        iterations - for example, if the objective function is cheap to
        evaluate, or if a substantial number of gradients are
        unspecified.
   
        List Default = List
   
        Nolist
   
           (AXIOM parameter LIST)
   
        Normally each optional parameter specification is printed as it
        is supplied. Nolist may be used to suppress the printing and List
        may be used to restore printing.
   
        Major Iteration Limit i Default = max(50,3(n+n )+10n )
                                                      L     N
   
        Iteration Limit
   
        Iters
   
        Itns
   
           (AXIOM parameter MAJI)
   
        The value of i specifies the maximum number of major iterations
         allowed before termination. Setting i=0 and Major Print Level > 0
        means that the workspace needed will be computed and printed, but
        no iterations will be performed.
   
         Major Print Level i Default = 10
   
        Print Level
   
           (AXIOM parameter MAJP)
   
        The value of i controls the amount of printout produced by the
         major iterations of E04UCF. (See also Minor Print Level below.)
        The levels of printing are indicated below.
   
        i        Output
   
        0        No output.
   
        1        The final solution only.
   
        5        One line for each major iteration (no printout of the
                 final solution).
   
        >=10     The final solution and one line of output for each
                 iteration.
   
        >=20     At each major iteration, the objective function, the
                 Euclidean norm of the nonlinear constraint violations,
                 the values of the nonlinear constraints (the vector c),
                 the values of the linear constraints (the vector A x),
                                                                   L
                 and the current values of the variables (the vector x).
   
        >=30     At each major iteration, the diagonal elements of the
                 matrix T associated with the TQ factorization (5) of the
                 QP working set, and the diagonal elements of R, the
                 triangular factor of the transformed and re-ordered
                 Hessian (6).
   
        Minor Iteration Limit i Default = max(50,3(n+n +n ))
                                                      L  N
   
           (AXIOM parameter MINI)
   
        The value of i specifies the maximum number of iterations for the
        optimality phase of each QP subproblem.
   
        Minor Print Level i Default = 0
   
           (AXIOM parameter MINP)
   
        The value of i controls the amount of printout produced by the
        minor iterations of E04UCF, i.e., the iterations of the quadratic
        programming algorithm. (See also Major Print Level, above.) The
        following levels of printing are available.
   
        i        Output
   
        0        No output.
   
        1        The final QP solution.
   
        5        One line of output for each minor iteration (no printout
                 of the final QP solution).
   
        >=10     The final QP solution and one brief line of output for
                 each minor iteration.
   
        >=20     At each minor iteration, the current estimates of the QP
                 multipliers, the current estimate of the QP search
                 direction, the QP constraint values, and the status of
                 each QP constraint.
   
        >=30     At each minor iteration, the diagonal elements of the
                 matrix T associated with the TQ factorization (5) of the
                 QP working set, and the diagonal elements of the
                 Cholesky factor R of the transformed Hessian (6).
   
                                                      _________
        Nonlinear Feasibility Tolerance r Default = \/(epsilon)
   
        See Linear Feasibility Tolerance, above.
   
                                                  0.8
        Optimality Tolerance r Default = (epsilon)
   
           (AXIOM parameter OPT)
   
        The parameter r ((epsilon) <=r<1) specifies the accuracy to which
                                  R
        the user wishes the final iterate to approximate a solution of
        the problem. Broadly speaking, r indicates the number of correct
        figures desired in the objective function at the solution. For
                           - 6
        example, if r is 10    and E04UCF terminates successfully, the
        final value of F should have approximately six correct figures.
        If r<(epsilon)  or r>=1 the default value is used.
                      R
   
         E04UCF will terminate successfully if the iterative sequence of
         x-values is judged to have converged and the final point satisfies
        the first-order Kuhn-Tucker conditions (see Section 3). The
        sequence of iterates is considered to have converged at x if
   
                                        _
                       (alpha) ||p||<=\/r(1+||x||),                  (8a)
   
        where p is the search direction and (alpha) the step length from
        (3). An iterate is considered to satisfy the first-order
        conditions for a minimum if
   
                     T         _
                  ||Z g  ||<=\/r(1+max(1+|F(x)|,||g  ||))            (8b)
                       FR                          FR
   
        and
   
                          |res |<=ftol for all j,                    (8c)
                              j
   
               T
        where Z g   is the projected gradient (see Section 3), g   is the
                 FR                                             FR
        gradient of F(x) with respect to the free variables, res  is the
                                                                j
        violation of the jth active nonlinear constraint, and ftol is the
        Nonlinear Feasibility Tolerance.
   
        Step Limit r Default = 2.0
   
           (AXIOM parameter STE)
   
        If r>0, r specifies the maximum change in variables at the first
                                                              bx
        step of the linesearch. In some cases, such as F(x)=ae   or
               b
        F(x)=ax , even a moderate change in the components of x can lead
        to floating-point overflow. The parameter r is therefore used to
        encourage evaluation of the problem functions at meaningful
                                                           ~
        points. Given any major iterate x, the first point x at which F
        and c are evaluated during the linesearch is restricted so that
   
                               ~
                             ||x-x|| <=r(1+||x|| ).
                                    2           2
   
        The linesearch may go on and evaluate F and c at points further
        from x if this will result in a lower value of the merit
        function. In this case, the character L is printed at the end of
        the optional line of printed output, (see Section 5.2). If L is
        printed for most of the iterations, r should be set to a larger
        value.
   
        Wherever possible, upper and lower bounds on x should be used to
        prevent evaluation of nonlinear functions at wild values. The
        default value Step Limit = 2.0 should not affect progress on
        well-behaved functions, but values 0.1 or 0.01 may be helpful
        when rapidly varying functions are present. If a small value of
        Step Limit is selected, a good starting point may be required. An
        important application is to the class of nonlinear least-squares
        problems. If r<=0, the default value is used.
   
        Start Objective Check At Variable k Default = 1
   
           (AXIOM parameter STAO)
   
        Start Constraint Check At Variable k Default = 1
   
           (AXIOM parameter STAC)
   
        Stop Objective Check At Variable l Default = n
   
           (AXIOM parameter STOO)
   
        Stop Constraint Check At Variable l Default = n
   
           (AXIOM parameter STOC)
   
        These keywords take effect only if Verify Level > 0 (see below).
        They may be used to control the verification of gradient elements
        computed by subroutines OBJFUN and CONFUN. For example, if the
        first 30 components of the objective gradient appeared to be
        correct in an earlier run, so that only component 31 remains
        questionable, it is reasonable to specify Start Objective Check
        At Variable 31. If the first 30 variables appear linearly in the
        objective, so that the corresponding gradient elements are
        constant, the above choice would also be appropriate.
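
         Continuing that (purely illustrative) example, the checks could
         be requested and restricted before the call of E04UCF with

               CALL E04UEF ('Verify Level = 1')
               CALL E04UEF ('Start Objective Check At Variable 31')
               CALL E04UEF ('Stop Objective Check At Variable 31')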
   
        Verify Level i Default = 0
   
        Verify No
   
        Verify Level - 1
   
        Verify Level 0
   
        Verify Objective Gradients
   
        Verify Level 1
   
        Verify Constraint Gradients
   
        Verify Level 2
   
        Verify
   
        Verify Yes
   
        Verify Gradients
   
        Verify Level 3
   
           (AXIOM parameter VE)
   
        These keywords refer to finite-difference checks on the gradient
        elements computed by the user-provided subroutines OBJFUN and
        CONFUN. (Unspecified gradient components are not checked.) It is
        possible to specify Verify Levels 0-3 in several ways, as
        indicated above. For example, the nonlinear objective gradient
        (if any) will be verified if either Verify Objective Gradients or
        Verify Level 1 is specified. Similarly, the objective and the
        constraint gradients will be verified if Verify Yes or Verify
        Level 3 or Verify is specified.
   
        If 0<=i<=3, gradients will be verified at the first point that
         satisfies the linear constraints and bounds. If i=0, only a
         'cheap' test will be performed, requiring one call to OBJFUN and
        one call to CONFUN. If 1<=i<=3, a more reliable (but more
        expensive) check will be made on individual gradient components,
        within the ranges specified by the Start and Stop keywords
        described above. A result of the form OK or BAD? is printed by
        E04UCF to indicate whether or not each component appears to be
        correct.
   
        If 10<=i<=13, the action is the same as for i - 10, except that
        it will take place at the user-specified initial value of x.
   
        We suggest that Verify Level 3 be specified whenever a new
        function routine is being developed.
   
        5.1.3. Optional parameter checklist and default values
   
        For easy reference, the following list shows all the valid
        keywords and their default values. The symbol (epsilon)
        represents the machine precision (see X02AJF(*) ).
   
        Optional Parameters      Default Values
   
   
   
        Central difference       Computed automatically
        interval
   
        Cold/Warm start          Cold start
   
        Crash tolerance          0.01
   
        Defaults
   
        Derivative level         3
   
        Difference interval      Computed automatically
   
                                   _________
        Feasibility tolerance    \/(epsilon)
   
                                          0.9
        Function precision       (epsilon)
   
        Hessian                  No
   
                                   10
        Infinite bound size      10
   
                                   10
        Infinite step size       10
   
                                   _________
        Linear feasibility       \/(epsilon)
        tolerance
   
        Linesearch tolerance     0.9
   
        List/Nolist              List
   
        Major iteration limit    max(50,3(n+n )+10n )
                                             L     N
   
        Major print level        10
   
        Minor iteration limit    max(50,3(n+n +n ))
                                             L  N
   
        Minor print level        0
   
                                   _________
        Nonlinear feasibility    \/(epsilon) if Derivative Level >= 2
        tolerance                                   0.33
                                 otherwise (epsilon)
   
                                          0.8
        Optimality tolerance     (epsilon)
                                          R
   
        Step limit               2.0
   
        Start objective check    1
   
        Start constraint check   1
   
        Stop objective check     n
   
        Stop constraint check    n
   
        Verify level             0
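
         As a purely illustrative sketch, a few of these defaults might
         be overridden by an options file such as the following (the
         particular values are arbitrary; such a file can be read with
         E04UDF, which is described later in this document):

               Begin * illustrative overrides of E04UCF defaults
                  Major Print Level    = 5
                  Linesearch Tolerance = 0.5
                  Hessian              = Yes
               End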
   
        5.2. Description of Printed Output
   
        The level of printed output from E04UCF is controlled by the user
        (see the description of Major Print Level and Minor Print Level
        in Section 5.1). If Minor Print Level > 0, output is obtained
        from the subroutines that solve the QP subproblem. For a detailed
        description of this information the reader should refer to
        E04NCF(*).
   
        When Major Print Level >= 5, the following line of output is
        produced at every major iteration of E04UCF. In all cases, the
        values of the quantities printed are those in effect on
        completion of the given iteration.
   
        Itn            is the iteration count.
   
        ItQP           is the sum of the iterations required by the
                       feasibility and optimality phases of the QP
                       subproblem. Generally, ItQP will be 1 in the later
                       iterations, since theoretical analysis predicts
                       that the correct active set will be identified
                       near the solution (see Section 3).
   
                       Note that ItQP may be greater than the Minor
                       Iteration Limit if some iterations are required
                       for the feasibility phase.
   
        Step           is the step (alpha) taken along the computed
                       search direction. On reasonably well-behaved
                       problems, the unit step will be taken as the
                       solution is approached.
   
        Nfun           is the cumulative number of evaluations of the
                       objective function needed for the linesearch.
                       Evaluations needed for the estimation of the
                       gradients by finite differences are not included.
                       Nfun is printed as a guide to the amount of work
                       required for the linesearch.
   
        Merit          is the value of the augmented Lagrangian merit
                       function (12) at the current iterate. This
                       function will decrease at each iteration unless it
                       was necessary to increase the penalty parameters
                       (see Section 8.2). As the solution is approached,
                       Merit will converge to the value of the objective
                       function at the solution.
   
                       If the QP subproblem does not have a feasible
                       point (signified by I at the end of the current
                       output line), the merit function is a large
                       multiple of the constraint violations, weighted by
                       the penalty parameters. During a sequence of major
                       iterations with infeasible subproblems, the
                       sequence of Merit values will decrease
                       monotonically until either a feasible subproblem
                       is obtained or E04UCF terminates with IFAIL = 3
                       (no feasible point could be found for the
                       nonlinear constraints).
   
                       If no nonlinear constraints are present (i.e.,
                       NCNLN = 0), this entry contains Objective, the
                       value of the objective function F(x). The
                       objective function will decrease monotonically to
                       its optimal value when there are no nonlinear
                       constraints.
   
        Bnd            is the number of simple bound constraints in the
                       predicted active set.
   
        Lin            is the number of general linear constraints in the
                       predicted active set.
   
        Nln            is the number of nonlinear constraints in the
                       predicted active set (not printed if NCNLN is
                       zero).
   
        Nz             is the number of columns of Z (see Section 8.1).
                       The value of Nz is the number of variables minus
                       the number of constraints in the predicted active
                       set; i.e., Nz = n-(Bnd + Lin + Nln).
   
        Norm Gf        is the Euclidean norm of g  , the gradient of the
                                                 FR
                       objective function with respect to the free
                       variables, i.e.,variables not currently held at a
                       bound.
   
                             T
        Norm Gz        is ||Z g  ||, the Euclidean norm of the projected
                               FR
                       gradient (see Section 8.1). Norm  Gz will be
                       approximately zero in the neighbourhood of a
                       solution.
   
        Cond H         is a lower bound on the condition number of the
                       Hessian approximation H.
   
        Cond Hz        is a lower bound on the condition number of the
                       projected Hessian approximation H  (
                                                        z
                            T      T
                       (H =Z H  Z=R R ; see (6) and (12) in Sections 3
                         z    FR   z z
                       and 8.1). The larger this number, the more
                       difficult the problem.
   
        Cond T         is a lower bound on the condition number of the
                       matrix of predicted active constraints.
   
        Norm C         is the Euclidean norm of the residuals of
                       constraints that are violated or in the predicted
                       active set (not printed if NCNLN is zero). Norm C
                       will be approximately zero in the neighbourhood of
                       a solution.
   
        Penalty        is the Euclidean norm of the vector of penalty
                        parameters used in the augmented Lagrangian merit
                       function (not printed if NCNLN is zero).
   
        Conv           is a three-letter indication of the status of the
                       three convergence tests (8a)-(8c) defined in the
                       description of the optional parameter Optimality
                        Tolerance in Section 5.1. Each letter is T if the
                       test is satisfied, and F otherwise. The three
                       tests indicate whether:
                       (a)   the sequence of iterates has converged;
   
                       (b)   the projected gradient (Norm  Gz) is
                             sufficiently small; and
   
                       (c)   the norm of the residuals of constraints in
                             the predicted active set (Norm  C) is small
                             enough.
                       If any of these indicators is F when E04UCF
                       terminates with IFAIL = 0, the user should check
                       the solution carefully.
   
        M              is printed if the Quasi-Newton update was modified
                       to ensure that the Hessian approximation is
                       positive-definite (see Section 8.3).
   
        I              is printed if the QP subproblem has no feasible
                       point.
   
        C              is printed if central differences were used to
                       compute the unspecified objective and constraint
                       gradients. If the value of Step is zero, the
                       switch to central differences was made because no
                       lower point could be found in the linesearch. (In
                       this case, the QP subproblem is resolved with the
                       central-difference gradient and Jacobian.) If the
                       value of Step is non-zero, central differences
                       were computed because Norm  Gz and Norm  C imply
                       that x is close to a Kuhn-Tucker point.
   
        L              is printed if the linesearch has produced a
                       relative change in x greater than the value
                       defined by the optional parameter Step Limit. If
                       this output occurs frequently during later
                       iterations of the run, Step Limit should be set to
                       a larger value.
   
        R              is printed if the approximate Hessian has been
                       refactorized. If the diagonal condition estimator
                       of R indicates that the approximate Hessian is
                       badly conditioned, the approximate Hessian is
                       refactorized using column interchanges. If
                       necessary, R is modified so that its diagonal
                       condition estimator is bounded.
   
        When Major Print Level = 1 or Major Print Level >= 10, the
        summary printout at the end of execution of E04UCF includes a
        listing of the status of every variable and constraint. Note that
        default names are assigned to all variables and constraints.
   
        The following describes the printout for each variable.
   
        Varbl          gives the name (V) and index j=1,2,...,n of the
                       variable.
   
        State          gives the state of the variable in the predicted
                       active set (FR if neither bound is in the active
                       set, EQ if a fixed variable, LL if on its lower
                       bound, UL if on its upper bound). If the variable
                       is predicted to lie outside its upper or lower
                       bound by more than the feasibility tolerance,
                       State will be ++ or -- respectively. (The latter
                       situation can occur only when there is no feasible
                       point for the bounds and linear constraints.)
   
        Value          is the value of the variable at the final
                       iteration.
   
        Lower bound    is the lower bound specified for the variable.
                        (None indicates that BL(j) <= -BIGBND.)
   
        Upper bound    is the upper bound specified for the variable.
                        (None indicates that BU(j) >= BIGBND.)
   
         Lagr Mult      is the value of the Lagrange multiplier for the
                       associated bound constraint. This will be zero if
                       State is FR. If x is optimal, the multiplier
                       should be non-negative if State is LL, and non-
                       positive if State is UL.
   
        Residual       is the difference between the variable Value and
                       the nearer of its bounds BL(j) and BU(j).
   
        The printout for general constraints is the same as for
        variables, except for the following:
   
        L Con      is the name (L) and index i, for i = 1,2,...,NCLIN of
                   a linear constraint.
   
        N Con      is the name (N) and index i, for i = 1,2,...,NCNLN of
                   a nonlinear constraint.
   
        6. Error Indicators and Warnings
   
        Errors or warnings specified by the routine:
   
        If on entry IFAIL = 0 or -1, explanatory error messages are
        output on the current error message unit (as defined by X04AAF).
   
        The input data for E04UCF should always be checked (even if
        E04UCF terminates with IFAIL=0).
   
        Note that when Print Level>0, a short description of IFAIL is
        printed.
   
         Errors and diagnostics indicated by IFAIL, together with some
         recommendations for recovery, are given below.
   
        IFAIL= 1
             The final iterate x satisfies the first-order Kuhn-Tucker
             conditions to the accuracy requested, but the sequence of
             iterates has not yet converged. E04UCF was terminated
             because no further improvement could be made in the merit
             function.
   
             This value of IFAIL may occur in several circumstances. The
             most common situation is that the user asks for a solution
             with accuracy that is not attainable with the given
              precision of the problem (as specified by Function
              Precision; see Section 5). This condition will also occur
              if, by
             chance, an iterate is an 'exact' Kuhn-Tucker point, but the
             change in the variables was significant at the previous
             iteration. (This situation often happens when minimizing
             very simple functions, such as quadratics.)
   
             If the four conditions listed in Section 5 for IFAIL = 0 are
             satisfied, x is likely to be a solution of (1) even if IFAIL
             = 1.
   
        IFAIL= 2
             E04UCF has terminated without finding a feasible point for
             the linear constraints and bounds, which means that no
             feasible point exists for the given value of Linear
             Feasibility Tolerance (see Section 5.1). The user should
             check that there are no constraint redundancies. If the data
             for the constraints are accurate only to an absolute
             precision (sigma), the user should ensure that the value of
             the optional parameter Linear Feasibility Tolerance is
             greater than (sigma). For example, if all elements of A are
             of order unity and are accurate to only three decimal
                                                                       -3
             places, Linear Feasibility Tolerance should be at least 10  .
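
              As a sketch of this remedy (the value shown is simply the
              illustrative figure used above), the tolerance could be
              relaxed before the run by

                    CALL E04UEF ('Linear Feasibility Tolerance = 1.0E-3')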
   
        IFAIL= 3
             No feasible point could be found for the nonlinear
             constraints. The problem may have no feasible solution. This
             means that there has been a sequence of QP subproblems for
             which no feasible point could be found (indicated by I at
             the end of each terse line of output). This behaviour will
             occur if there is no feasible point for the nonlinear
             constraints. (However, there is no general test that can
             determine whether a feasible point exists for a set of
             nonlinear constraints.) If the infeasible subproblems occur
             from the very first major iteration, it is highly likely
             that no feasible point exists. If infeasibilities occur when
             earlier subproblems have been feasible, small constraint
             inconsistencies may be present. The user should check the
             validity of constraints with negative values of ISTATE. If
             the user is convinced that a feasible point does exist,
             E04UCF should be restarted at a different starting point.
   
        IFAIL= 4
             The limiting number of iterations (determined by the
              optional parameter Major Iteration Limit; see Section 5.1)
             has been reached.
   
             If the algorithm appears to be making progress, Major
             Iteration Limit may be too small. If so, increase its value
             and rerun E04UCF (possibly using the Warm Start option). If
             the algorithm seems to be 'bogged down', the user should
             check for incorrect gradients or ill-conditioning as
             described below under IFAIL = 6.
   
             Note that ill-conditioning in the working set is sometimes
             resolved automatically by the algorithm, in which case
             performing additional iterations may be helpful. However,
             ill-conditioning in the Hessian approximation tends to
             persist once it has begun, so that allowing additional
             iterations without altering R is usually inadvisable. If the
             quasi-Newton update of the Hessian approximation was
             modified during the latter iterations (i.e., an M occurs at
             the end of each terse line), it may be worthwhile to try a
             warm start at the final point as suggested above.
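
              A sketch of such a restart is shown below; the limit of
              200 is arbitrary, and the subsequent call to E04UCF (from
              the final X, with ISTATE, the multiplier estimates and R
              preserved from the first run) is omitted:

                    CALL E04UEF ('Major Iteration Limit = 200')
                    CALL E04UEF ('Warm Start')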
   
        IFAIL= 6
             x does not satisfy the first-order Kuhn-Tucker conditions,
             and no improved point for the merit function could be found
             during the final line search.
   
             A sufficient decrease in the merit function could not be
             attained during the final line search. This sometimes occurs
             because an overly stringent accuracy has been requested,
             i.e., Optimality Tolerance is too small. In this case the
             user should apply the four tests described under IFAIL = 0
             above to determine whether or not the final solution is
             acceptable (see Gill et al [10], for a discussion of the
             attainable accuracy).
   
             If many iterations have occurred in which essentially no
             progress has been made and E04UCF has failed completely to
             move from the initial point then subroutines OBJFUN or
             CONFUN may be incorrect. The user should refer to comments
             below under IFAIL = 7 and check the gradients using the
             Verify parameter. Unfortunately, there may be small errors
             in the objective and constraint gradients that cannot be
             detected by the verification process. Finite-difference
             approximations to first derivatives are catastrophically
             affected by even small inaccuracies. An indication of this
             situation is a dramatic alteration in the iterates if the
             finite-difference interval is altered. One might also
             suspect this type of error if a switch is made to central
             differences even when Norm Gz and Norm C are large.
   
             Another possibility is that the search direction has become
             inaccurate because of ill-conditioning in the Hessian
             approximation or the matrix of constraints in the working
             set; either form of ill-conditioning tends to be reflected
             in large values of ItQP (the number of iterations required
             to solve each QP subproblem).
   
             If the condition estimate of the projected Hessian (Cond Hz)
             is extremely large, it may be worthwhile to rerun E04UCF
             from the final point with the Warm Start option. In this
             situation, ISTATE should be left unaltered and R should be
             reset to the identity matrix.
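
              For example, R could be reset to the identity before the
              warm-start call by a fragment of the following form (N
              denotes the number of variables):

                    DO 20 J = 1, N
                       DO 10 I = 1, N
                          R(I,J) = 0.0D0
   10                  CONTINUE
                       R(J,J) = 1.0D0
   20               CONTINUE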
   
             If the matrix of constraints in the working set is ill-
             conditioned (i.e., Cond T is extremely large), it may be
             helpful to run E04UCF with a relaxed value of the
              Feasibility Tolerance. (Constraint dependencies are often
              indicated by wide variations in size in the diagonal
              elements of the matrix T, whose diagonals will be printed
              for Major Print Level >= 30.)
   
        IFAIL= 7
             The user-provided derivatives of the objective function
             and/or nonlinear constraints appear to be incorrect.
   
             Large errors were found in the derivatives of the objective
             function and/or nonlinear constraints. This value of IFAIL
             will occur if the verification process indicated that at
             least one gradient or Jacobian component had no correct
             figures. The user should refer to the printed output to
             determine which elements are suspected to be in error.
   
              As a first step, the user should check that the code for the
             objective and constraint values is correct - for example, by
             computing the function at a point where the correct value is
             known. However, care should be taken that the chosen point
             fully tests the evaluation of the function. It is remarkable
             how often the values x=0 or x=1 are used to test function
             evaluation procedures, and how often the special properties
             of these numbers make the test meaningless.
   
             Special care should be used in this test if computation of
             the objective function involves subsidiary data communicated
             in COMMON storage. Although the first evaluation of the
             function may be correct, subsequent calculations may be in
              error because some of the subsidiary data has accidentally
             been overwritten.
   
             Errors in programming the function may be quite subtle in
             that the function value is 'almost' correct. For example,
             the function may not be accurate to full precision because
             of the inaccurate calculation of a subsidiary quantity, or
             the limited accuracy of data upon which the function
             depends. A common error on machines where numerical
             calculations are usually performed in double precision is to
             include even one single-precision constant in the
             calculation of the function; since some compilers do not
             convert such constants to double precision, half the correct
             figures may be lost by such a seemingly trivial error.
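
              The following fragment is purely illustrative: the
              constant 0.1 is held only to single precision before
              being assigned to X, so the difference printed is of the
              order of 1.0E-9 rather than zero.

                    DOUBLE PRECISION X, Y
                    X = 0.1
                    Y = 0.1D0
                    WRITE (*,*) Y - X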
   
        IFAIL= 9
             An input parameter is invalid. The user should refer to the
             printed output to determine which parameter must be
             redefined.
   
         Overflow
             If the printed output before the overflow error contains a
             warning about serious ill-conditioning in the working set
             when adding the jth constraint, it may be possible to avoid
             the difficulty by increasing the magnitude of the optional
              parameter Linear Feasibility Tolerance or Nonlinear
              Feasibility Tolerance, and rerunning the program. If the
             message recurs even after this change, the offending
             linearly dependent constraint (with index 'j') must be
             removed from the problem. If overflow occurs in one of the
             user-supplied routines (e.g. if the nonlinear functions
             involve exponentials or singularities), it may help to
             specify tighter bounds for some of the variables (i.e.,
             reduce the gap between appropriate l  and u ).
                                                 j      j
   
        7. Accuracy
   
        If IFAIL = 0 on exit then the vector returned in the array X is
        an estimate of the solution to an accuracy of approximately
         Optimality Tolerance (see Section 5.1), whose default value is
                 0.8
        (epsilon)   , where (epsilon) is the machine precision (see
        X02AJF(*)).
   
        8. Further Comments
   
        In this section we give some further details of the method used
        by E04UCF.
   
        8.1. Solution of the Quadratic Programming Subproblem
   
        The search direction p is obtained by solving (4) using the
        method of E04NCF(*) (Gill et al [8]), which was specifically
        designed to be used within an SQP algorithm for nonlinear
        programming.
   
        The method of E04UCF is a two-phase (primal) quadratic
        programming method. The two phases of the method are: finding an
        initial feasible point by minimizing the sum of infeasibilities
        (the feasibility phase), and minimizing the quadratic objective
        function within the feasible region (the optimality phase). The
         computations in both phases are performed by the same subroutines.
        The two-phase nature of the algorithm is reflected by changing
        the function being minimized from the sum of infeasibilities to
        the quadratic objective function.
   
        In general, a quadratic program must be solved by iteration. Let
        p denote the current estimate of the solution of (4); the new
                _
        iterate p is defined by
   
                               _
                               p=p+(sigma)d,                          (9)
   
        where, as in (3), (sigma) is a non-negative step length and d is
        a search direction.
   
        At the beginning of each iteration of E04UCF, a working set is
        defined of constraints (general and bound) that are satisfied
        exactly. The vector d is then constructed so that the values of
        constraints in the working set remain unaltered for any move
        along d. For a bound constraint in the working set, this property
        is achieved by setting the corresponding component of d to zero,
        i.e., by fixing the variable at its bound. As before, the
        subscripts 'FX' and 'FR' denote selection of the components
        associated with the fixed and free variables.
   
        Let C denote the sub-matrix of rows of
   
                                      (A )
                                      ( L)
                                      (A )
                                      ( N)
   
        corresponding to general constraints in the working set. The
        general constraints in the working set will remain unaltered if
   
                                 C  d  =0,                           (10)
                                  FR FR
   
        which is equivalent to defining d   as
                                         FR
   
                                  d  =Zd                             (11)
                                   FR   z
   
        for some vector d , where Z is the matrix associated with the TQ
                         z
        factorization (5) of C  .
                              FR
   
        The definition of d  in (11) depends on whether the current p is
                           z
        feasible. If not, d  is zero except for a component (gamma) in
                           z
        the jth position, where j and (gamma) are chosen so that the sum
        of infeasibilities is decreasing along d. (For further details,
        see Gill et al [8].) In the feasible case, d  satisfies the
                                                    z
        equations
   
                               T       T
                              R R d =-Z q  ,                         (12)
                               z z z     FR
   
                                            T
        where R  is the Cholesky factor of Z H  Z and q is the gradient
               z                              FR
                                                                   T
        of the quadratic objective function (q=g+Hp). (The vector Z q
                                                                     FR
         is the projected gradient of the QP.) With (12), p+d is the
        minimizer of the quadratic objective function subject to treating
        the constraints in the working set as equalities.
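
         The following fragment is a purely illustrative dense sketch
         of (12); it is not the scheme used by E04NCF, which recurs the
         relevant factorizations. The names are arbitrary: R holds the
         upper-triangular Cholesky factor of the projected Hessian, ZTQ
         holds the projected gradient, and the solution overwrites DZ.

               SUBROUTINE QPDZ (NZ, R, LDR, ZTQ, DZ)
               INTEGER          NZ, LDR, I, J
               DOUBLE PRECISION R(LDR,NZ), ZTQ(NZ), DZ(NZ), T
*              Forward substitution:  R'*W = -ZTQ  (W overwrites DZ)
               DO 20 I = 1, NZ
                  T = -ZTQ(I)
                  DO 10 J = 1, I - 1
                     T = T - R(J,I)*DZ(J)
   10             CONTINUE
                  DZ(I) = T/R(I,I)
   20          CONTINUE
*              Back substitution:  R*DZ = W
               DO 40 I = NZ, 1, -1
                  T = DZ(I)
                  DO 30 J = I + 1, NZ
                     T = T - R(I,J)*DZ(J)
   30             CONTINUE
                  DZ(I) = T/R(I,I)
   40          CONTINUE
               RETURN
               END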
   
        If the QP projected gradient is zero, the current point is a
        constrained stationary point in the subspace defined by the
         working set. During the feasibility phase, the projected gradient
        will usually be zero only at a vertex (although it may vanish at
        non-vertices in the presence of constraint dependencies). During
        the optimality phase, a zero projected gradient implies that p
        minimizes the quadratic objective function when the constraints
        in the working set are treated as equalities. In either case,
        Lagrange multipliers are computed. Given a positive constant
        (delta) of the order of the machine precision, the Lagrange
        multiplier (mu)  corresponding to an inequality constraint in the
                       j
         working set is said to be optimal if
        (mu) <=(delta) when the jth constraint is at its upper bound, or
            j
        if (mu) >=-(delta) when the associated constraint is at its lower
               j
        bound. If any multiplier is non-optimal, the current objective
        function (either the true objective or the sum of
        infeasibilities) can be reduced by deleting the corresponding
        constraint from the working set.
   
        If optimal multipliers occur during the feasibility phase and the
        sum of infeasibilities is non-zero, no feasible point exists. The
        QP algorithm will then continue iterating to determine the
        minimum sum of infeasibilities. At this point, the Lagrange
        multiplier (mu)  will satisfy -(1+(delta))<=(mu) <=(delta) for an
                       j                                j
        inequality constraint at its upper bound, and
        -(delta)<=(mu) <=1+(delta) for an inequality at its lower bound.
                      j
        The Lagrange multiplier for an equality constraint will satisfy
        |(mu) |<=1+(delta).
             j
   
        The choice of step length (sigma) in the QP iteration (9) is
        based on remaining feasible with respect to the satisfied
        constraints. During the optimality phase, if p+d is feasible,
        (sigma) will be taken as unity. (In this case, the projected
                    _
        gradient at p will be zero.) Otherwise, (sigma) is set to
         (sigma) , the step to the 'nearest' constraint, which is added to
               M
        the working set at the next iteration.
   
        Each change in the working set leads to a simple change to C  :
                                                                    FR
        if the status of a general constraint changes, a row of C   is
                                                                 FR
        altered; if a bound constraint enters or leaves the working set,
        a column of C   changes. Explicit representations are recurred of
                     FR
                                                       T       T
        the matrices T, Q   and R, and of the vectors Q q and Q g.
                         FR
   
        8.2. The Merit Function
   
        After computing the search direction as described in Section 3,
        each major iteration proceeds by determining a step length
        (alpha) in (3) that produces a 'sufficient decrease' in the
        augmented Lagrangian merit function
   
   
                              --
        L(x,(lambda),s)=F(x)- > (lambda) (c (x)-s )
                              --        i  i     i
                              i
   
                              1 --                2
                            + - > (rho) (c (x)-s ) ,                 (13)
                              2 --     i  i     i
                                i
   
         where x, (lambda) and s vary during the linesearch. The summation
        terms in (13) involve only the nonlinear constraints. The vector
        (lambda) is an estimate of the Lagrange multipliers for the
        nonlinear constraints of (1). The non-negative slack variables
        {s } allow nonlinear inequality constraints to be treated without
          i
        introducing discontinuities. The solution of the QP subproblem
        (4) provides a vector triple that serves as a direction of search
        for the three sets of variables. The non-negative vector (rho) of
        penalty parameters is initialised to zero at the beginning of the
        first major iteration. Thereafter, selected components are
        increased whenever necessary to ensure descent for the merit
        function. Thus, the sequence of norms of (rho) (the printed
        quantity Penalty, see Section 5.2) is generally non-decreasing,
        although each (rho)  may be reduced a limited number of times.
                           i
   
        The merit function (13) and its global convergence properties are
        described in Gill et al [9].
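
         As a purely illustrative sketch of (13), the merit function
         might be evaluated as follows; the names are arbitrary, with
         F, C, CLAM, S and RHO holding F(x), c(x), (lambda), s and
         (rho) respectively.

               DOUBLE PRECISION FUNCTION AUGLAG (NCNLN, F, C, CLAM,
              *                                  S, RHO)
               INTEGER          NCNLN, I
               DOUBLE PRECISION F, C(NCNLN), CLAM(NCNLN), S(NCNLN),
              *                 RHO(NCNLN), T
               AUGLAG = F
               DO 10 I = 1, NCNLN
                  T = C(I) - S(I)
                  AUGLAG = AUGLAG - CLAM(I)*T + 0.5D0*RHO(I)*T*T
   10          CONTINUE
               RETURN
               END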
   
        8.3. The Quasi-Newton Update
   
        The matrix H in (4) is a positive-definite quasi-Newton
        approximation to the Hessian of the Lagrangian function. (For a
        review of quasi-Newton methods, see Dennis and Schnabel [3].) At
                                                                     _
        the end of each major iteration, a new Hessian approximation H is
        defined as a rank-two modification of H. In E04UCF, the BFGS
        quasi-Newton update is used:
   
                          _     1     T    1   T
                          H=H- ----Hss H+ ---yy ,                    (14)
                                T          T
                               s Hs       y s
   
                _
        where s=x-x (the change in x).
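
         As an illustration of the rank-two formula (14) only (E04UCF
         itself updates the Cholesky factor of a transformed Hessian,
         as described below), a dense update of an N by N symmetric
         array H might be sketched as follows; the names are arbitrary
         and HS is a work array of length N.

               SUBROUTINE BFGSUP (N, H, LDH, S, Y, HS)
               INTEGER          N, LDH, I, J
               DOUBLE PRECISION H(LDH,N), S(N), Y(N), HS(N), SHS, YTS
*              Form H*s and the scalars s'*H*s and y'*s
               SHS = 0.0D0
               YTS = 0.0D0
               DO 20 I = 1, N
                  HS(I) = 0.0D0
                  DO 10 J = 1, N
                     HS(I) = HS(I) + H(I,J)*S(J)
   10             CONTINUE
                  SHS = SHS + S(I)*HS(I)
                  YTS = YTS + Y(I)*S(I)
   20          CONTINUE
*              Apply (14); this assumes y'*s > 0, as discussed below
               DO 40 J = 1, N
                  DO 30 I = 1, N
                     H(I,J) = H(I,J) - HS(I)*HS(J)/SHS + Y(I)*Y(J)/YTS
   30             CONTINUE
   40          CONTINUE
               RETURN
               END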
   
        In E04UCF, H is required to be positive-definite. If H is
                           _
        positive-definite, H defined by (14) will be positive-definite if
                     T
        and only if y s is positive (see, e.g. Dennis and More [1]).
        Ideally, y in (14) would be taken as y , the change in gradient
                                              L
        of the Lagrangian function
   
                             _ _T         T
                          y =g-A (mu) -g+A (mu) ,                    (15)
                           L    N    N    N    N
   
        where (mu)  denotes the QP multipliers associated with the
                  N
                                                           T
        nonlinear constraints of the original problem. If y s is not
                                                           L
        sufficiently positive, an attempt is made to perform the update
        with a vector y of the form
   
                          m
                           N
                          --             _    _
                    y=y + >  (omega) (a (x)c (x)-a (x)c (x)),
                       L  --        i  i    i     i    i
                          i=1
   
        where (omega) >=0. If no such vector can be found, the update is
                     i
         performed with a scaled y ; in this case, M is printed to
                                  L
         indicate that the update is modified.
   
        Rather than modifying H itself, the Cholesky factor of the
        transformed Hessian H  (6) is updated, where Q is the matrix from
                             Q
        (5) associated with the active set of the QP subproblem. The
         update (14) is equivalent to the following update to H :
                                                              Q
   
                     _        1        T     1     T
                     H =H - ------H s s H + ----y y ,                (16)
                      Q  Q   T     Q Q Q Q   T   Q Q
                            s H s           y s
                             Q Q Q           Q Q
   
                  T           T
        where y =Q y, and s =Q s. This update may be expressed as a rank-
               Q           Q
        one update to R (see Dennis and Schnabel [2]).
   
        9. Example
   
        This section describes one version of the so-called 'hexagon'
        problem (a different formulation is given as Problem 108 in Hock
        and Schittkowski [11]). The problem is to determine the hexagon
        of maximum area such that no two of its vertices are more than
        one unit apart (the solution is not a regular hexagon).
   
        All constraint types are included (bounds, linear, nonlinear),
        and the Hessian of the Lagrangian function is not positive-
        definite at the solution. The problem has nine variables, non-
        infinite bounds on seven of the variables, four general linear
        constraints, and fourteen nonlinear constraints.
   
        The objective function is
   
                      F(x)=-x x +x x -x x -x x +x x +x x .
                             2 6  1 7  3 7  5 8  4 9  3 8
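
         The following subroutine is an illustrative evaluation of this
         objective and its gradient only; the name is arbitrary, and it
         does not use the OBJFUN interface required by E04UCF.

               SUBROUTINE HEXOBJ (X, F, G)
               DOUBLE PRECISION X(9), F, G(9)
               F = -X(2)*X(6) + X(1)*X(7) - X(3)*X(7) - X(5)*X(8)
              *    + X(4)*X(9) + X(3)*X(8)
               G(1) =  X(7)
               G(2) = -X(6)
               G(3) =  X(8) - X(7)
               G(4) =  X(9)
               G(5) = -X(8)
               G(6) = -X(2)
               G(7) =  X(1) - X(3)
               G(8) =  X(3) - X(5)
               G(9) =  X(4)
               RETURN
               END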
   
        The bounds on the variables are
   
             x >=0, -1<=x <=1, x >=0,x >=0, x >=0,x <=0, and x <=0.
              1          3      5     6      7     8          9
   
        Thus,
   
                                                             T
                  l =(0,-infty,-1,-infty,0,0,0,-infty,-infty)
                   B
   
                                                               T
                 u =(infty,infty,1,infty,infty,infty,infty,0,0)
                  B
   
        The general linear constraints are
   
             x -x >=0,x -x >=0, x -x >=0,and x -x >=0.
              2  1     3  2      3  4         4  5
   
        Hence,
   
                  (0)     (-1  1 0  0  0 0 0 0 0)        (infty)
                  (0)     ( 0 -1 1  0  0 0 0 0 0)        (infty)
               l =(0), A =( 0  0 1 -1  0 0 0 0 0) and u =(infty).
                L (0)   L ( 0  0 0  1 -1 0 0 0 0)      L (infty)
   
        The nonlinear constraints are all of the form c (x)<=1, for
                                                       i
        i=1,2,...,14; hence, all components of l  are -infty, and all
                                                N
        components of u  are 1. The fourteen functions {c (x)} are
                       N                                 i
   
                    2  2
             c (x)=x +x ,
              1     1  6
   
                          2        2
             c (x)=(x -x ) +(x -x ) ,
              2      2  1     7  6
   
                          2  2
             c (x)=(x -x ) +x ,
              3      3  1    6
   
                          2        2
             c (x)=(x -x ) +(x -x ) ,
              4      1  4     6  8
   
                          2        2
             c (x)=(x -x ) +(x -x ) ,
              5      1  5     6  9
   
                    2  2
             c (x)=x +x ,
              6     2  7
   
                          2  2
             c (x)=(x -x ) +x ,
              7      3  2    7
   
                          2        2
             c (x)=(x -x ) +(x -x ) ,
              8      4  2     8  7
   
                          2        2
             c (x)=(x -x ) +(x -x ) ,
              9      2  5     7  9
   
                           2  2
             c  (x)=(x -x ) +x ,
              10      4  3    8
   
                           2  2
             c  (x)=(x -x ) +x ,
              11      5  3    9
   
                     2  2
             c  (x)=x +x ,
              12     4  8
   
                           2        2
             c  (x)=(x -x ) +(x -x ) ,
              13      4  5     9  8
   
                     2  2
             c  (x)=x +x .
              14     5  9
   
        An optimal solution (to five figures) is
   
   
         *
        x =(0.060947,0.59765,1.0,0.59765,0.060947,0.34377,0.5,
                         T
            -0.5,0.34377) ,
   
               *
         and F(x )=-1.34996. (The optimal objective function value is
         unique, but it is achieved for other values of x.) Five nonlinear
                                                        *
        constraints and one simple bound are active at x . The sample
        solution output is given later in this section, following the
        sample main program and problem definition.
   
        Two calls are made to E04UCF in order to demonstrate some of its
        features. For the first call, the starting point is:
   
                                                                      T
         x =(0.1,0.125,0.666666,0.142857,0.111111,0.2,0.25,-0.2,-0.25) .
          0
   
        All objective and constraint derivatives are specified in the
        user-provided subroutines OBJFN1 and CONFN1, i.e., the default
        option Derivative Level =3 is used.
   
        On completion of the first call to E04UCF, the optimal variables
        are perturbed to produce the initial point for a second run in
        which the problem functions are defined by the subroutines OBJFN2
        and CONFN2. To illustrate one of the finite-difference options in
        E04UCF, these routines are programmed so that the first six
        components of the objective gradient and the constant elements of
        the Jacobian matrix are not specified; hence, the option
        Derivative Level =0 is chosen. During computation of the finite-
        difference intervals, the constant Jacobian elements are
        identified and set, and E04UCF automatically increases the
        derivative level to 2.
   
        The second call to E04UCF illustrates the use of the Warm Start
        Level option to utilize the final active set, nonlinear
        multipliers and approximate Hessian from the first run. Note that
        Hessian = Yes was specified for the first run so that the array R
        would contain the Cholesky factor of the approximate Hessian of
        the Lagrangian.
   
        The two calls to E04UCF illustrate the alternative methods of
        assigning default parameters. (There is no special significance
        in the order of these assignments; an options file may just as
        easily be used to modify parameters set by E04UEF.)
   
        The results are typical of those obtained from E04UCF when
         solving well-behaved (non-trivial) nonlinear problems. The
        approximate Hessian and working set remain relatively well-
        conditioned. Similarly the penalty parameters remain small and
        approximately constant. The numerical results illustrate much of
        the theoretically predicted behaviour of a quasi-Newton SQP
        method. As x approaches the solution, only one minor iteration is
         performed per major iteration, and the Norm Gz and Norm C columns
        exhibit the fast linear convergence rate mentioned in Sections 5
        and 6. Note that the constraint violations converge earlier than
         the projected gradient. The final values of the projected gradient
        norm and constraint norm reflect the limiting accuracy of the two
        quantities. It is possible to achieve almost full precision in
        the constraint norm but only half precision in the projected
        gradient norm. Note that the final accuracy in the nonlinear
        constraints is considerably better than the feasibility
        tolerance, because the constraint violations are being refined
        during the last few iterations while the algorithm is working to
        reduce the projected gradient norm. In this problem, the
         constraint values and Lagrange multipliers at the solution are
         'well balanced', i.e., all the multipliers are approximately the
        same order of magnitude. The behaviour is typical of a well-
        scaled problem.
   
        The example program is not reproduced here. The source code for
        all example programs is distributed with the NAG Foundation
        Library software and should be available on-line.
\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04udf}{NAG On-line Documentation: e04udf}
\beginscroll
\begin{verbatim}



     E04UDF(3NAG)      Foundation Library (12/10/92)      E04UDF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04UDF
                  E04UDF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          To supply optional parameters to E04UCF from an external file.

          2. Specification

                 SUBROUTINE E04UDF (IOPTNS, INFORM)
                 INTEGER          IOPTNS, INFORM

          3. Description

          E04UDF may be used to supply values for optional parameters to
          E04UCF. E04UDF reads an external file and each line of the file
          defines a single optional parameter. It is only necessary to
          supply values for those parameters whose values are to be
          different from their default values.

          Each optional parameter is defined by a single character string
          of up to 72 characters, consisting of one or more items. The
          items associated with a given option must be separated by spaces,
          or equal signs (=). Alphabetic characters may be upper or lower
          case. The string

                 Print level = 1

          is an example of a string used to set an optional parameter. For
          each option the string contains one or more of the following
          items:

          (a)   A mandatory keyword.

          (b)   A phrase that qualifies the keyword.

          (c)   A number that specifies an INTEGER or real value. Such
                numbers may be up to 16 contiguous characters in Fortran
                77's I, F, E or D formats, terminated by a space if this is
                not the last item on the line.

          Blank strings and comments are ignored. A comment begins with an
          asterisk (*) and all subsequent characters in the string are
          regarded as part of the comment.

          The file containing the options must start with begin and must
           finish with end. An example of a valid options file is:

                Begin * Example options file
                 Print level =10
                End

          Normally each line of the file is printed as it is read, on the
          current advisory message unit (see X04ABF), but printing may be
           suppressed using the keyword nolist. To suppress printing of
           begin, nolist must be the first option supplied, as in the file:

                Begin
                  Nolist
                  Print level = 10
                End

          Printing will automatically be turned on again after a call to
           E04UCF, and may be turned on again at any time by using the
           keyword list.

          Optional parameter settings are preserved following a call to
          E04UCF, and so the keyword defaults is provided to allow the user
          to reset all the optional parameters to their default values
          prior to a subsequent call to E04UCF.

          A complete list of optional parameters, their abbreviations,
          synonyms and default values is given in Section 5.1 of the
          document for E04UCF.
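
           For example (the unit number and file name here are
           arbitrary), an options file of the form shown above might be
           read by:

                  OPEN (UNIT=47, FILE='e04ucf.opt', STATUS='OLD')
                  CALL E04UDF (47, INFORM)
                  IF (INFORM.NE.0) WRITE (*,*) 'E04UDF: INFORM =', INFORM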

          4. References

          None.

          5. Parameters

           1:  IOPTNS -- INTEGER                                      Input
               On entry: IOPTNS must be the unit number of the options
               file. Constraint: 0 <= IOPTNS <= 99.

           2:  INFORM -- INTEGER                                     Output
               On exit: INFORM will be zero, if an options file with the
               current structure has been read. Otherwise INFORM will be
               positive. Positive values of INFORM indicate that an options
               file may not have been successfully read as follows:
               INFORM = 1
                     IOPTNS is not in the range [0,99].

               INFORM = 2
                     begin was found, but end-of-file was found before end
                     was found.

               INFORM = 3
                     end-of-file was found before begin was found.

          6. Error Indicators and Warnings

          If a line is not recognised as a valid option, then a warning
          message is output on the current advisory message unit (X04ABF).

          7. Accuracy

          Not applicable.

          8. Further Comments

          E04UEF may also be used to supply optional parameters to E04UCF.

          9. Example

          See the example for E04UCF.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04uef}{NAG On-line Documentation: e04uef}
\beginscroll
\begin{verbatim}



     E04UEF(3NAG)      Foundation Library (12/10/92)      E04UEF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04UEF
                  E04UEF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          To supply individual optional parameters to E04UCF.

          2. Specification

                 SUBROUTINE E04UEF (STRING)
                 CHARACTER*(*)    STRING

          3. Description

          E04UEF may be used to supply values for optional parameters to
          E04UCF. It is only necessary to call E04UEF for those parameters
          whose values are to be different from their default values. One
          call to E04UEF sets one parameter value.

          Each optional parameter is defined by a single character string
          of up to 72 characters, consisting of one or more items. The
          items associated with a given option must be separated by spaces,
          or equal signs (=). Alphabetic characters may be upper or lower
          case. The string

                Print level = 1

          is an example of a string used to set an optional parameter. For
          each option the string contains one or more of the following
          items:

          (a)   A mandatory keyword.

          (b)   A phrase that qualifies the keyword.

          (c)   A number that specifies an INTEGER or real value. Such
                numbers may be up to 16 contiguous characters in Fortran
                77's I, F, E or D formats, terminated by a space if this is
                not the last item on the line.

          Blank strings and comments are ignored. A comment begins with an
          asterisk (*) and all subsequent characters in the string are
          regarded as part of the comment.

          Normally, each user-specified option is printed as it is defined,
          on the current advisory message unit (see X04ABF), but this
           printing may be suppressed using the keyword nolist. Thus the
          statement

                 CALL E04UEF ('Nolist')

          suppresses printing of this and subsequent options. Printing will
          automatically be turned on again after a call to E04UCF, and may
          be turned on again at any time by the user, by using the keyword
          list.

          Optional parameter settings are preserved following a call to
          E04UCF, and so the keyword defaults is provided to allow the user
          to reset all the optional parameters to their default values by
          the statement,

                 CALL E04UEF ('Defaults')

          prior to a subsequent call to E04UCF.

          A complete list of optional parameters, their abbreviations,
          synonyms and default values is given in Section 5.1 of the
          document for E04UCF.

          4. References

          None.

          5. Parameters

           1:  STRING -- CHARACTER*(*)                                Input
                On entry: STRING must be a single valid option string. See
                Section 3 above and Section 5.1 of the routine document for
                E04UCF.

          6. Error Indicators and Warnings

          If the parameter STRING is not recognised as a valid option
          string, then a warning message is output on the current advisory
          message unit (X04ABF).

          7. Accuracy

          Not applicable.

          8. Further Comments

          E04UDF may also be used to supply optional parameters to E04UCF.

          9. Example

          See the example for E04UCF.

\end{verbatim}
\endscroll
\end{page}
\begin{page}{manpageXXe04ycf}{NAG On-line Documentation: e04ycf}
\beginscroll
\begin{verbatim}



     E04YCF(3NAG)      Foundation Library (12/10/92)      E04YCF(3NAG)



          E04 -- Minimizing or Maximizing a Function                 E04YCF
                  E04YCF -- NAG Foundation Library Routine Document

          Note: Before using this routine, please read the Users' Note for
          your implementation to check implementation-dependent details.
          The symbol (*) after a NAG routine name denotes a routine that is
          not included in the Foundation Library.

          1. Purpose

          E04YCF returns estimates of elements of the variance-covariance
          matrix of the estimated regression coefficients for a nonlinear
          least squares problem. The estimates are derived from the
          Jacobian of the function f(x) at the solution.

          This routine may be used following any one of the nonlinear
          least-squares routines E04FCF(*), E04FDF, E04GBF(*), E04GCF,
          E04GDF(*), E04GEF(*), E04HEF(*), E04HFF(*).

          2. Specification

                 SUBROUTINE E04YCF (JOB, M, N, FSUMSQ, S, V, LV, CJ, WORK,
                1                   IFAIL)
                 INTEGER          JOB, M, N, LV, IFAIL
                 DOUBLE PRECISION FSUMSQ, S(N), V(LV,N), CJ(N), WORK(N)

          3. Description

          E04YCF is intended for use when the nonlinear least-squares
                          T
          function, F(x)=f (x)f(x), represents the goodness of fit of a
          nonlinear model to observed data. The routine assumes that the
          Hessian of F(x), at the solution, can be adequately approximated
               T
          by 2J J, where J is the Jacobian of f(x) at the solution. The
          estimated variance-covariance matrix C is then given by

                                 2  T  -1    T
                        C=(sigma) (J J)     J J non-singular,

                       2
           where (sigma)  is the estimated variance of the residual at the
           solution, x, given by

                                          2  F(x)
                                   (sigma) = ----,
                                             m-n

          m being the number of observations and n the number of variables.

          The diagonal elements of C are estimates of the variances of the
          estimated regression coefficients. See the Chapter Introduction
          E04 and Bard [1] and Wolberg [2] for further information on the
          use of C.
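
           As an illustrative sketch (SE is an arbitrary user array; it
           is assumed that the requested elements, here the diagonal of
           C since JOB = 0, are returned in CJ), estimated standard
           errors of the coefficients might be formed by:

                  IFAIL = 0
                  CALL E04YCF (0, M, N, FSUMSQ, S, V, LV, CJ, WORK, IFAIL)
                  IF (IFAIL.EQ.0) THEN
                     DO 10 J = 1, N
                        SE(J) = SQRT(CJ(J))
   10                CONTINUE
                  END IF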

                T
          When J J is singular then C is taken to be

                                           2  T  *
                                  C=(sigma) (J J) ,

                  T  *                           T
          where (J J)  is the pseudo-inverse of J J, but in this case the
          parameter IFAIL is returned as non-zero as a warning to the user
          that J has linear dependencies in its columns. The assumed rank
          of J can be obtained from IFAIL.

          The routine can be used to find either the diagonal elements of
          C, or the elements of the jth column of C, or the whole of C.

          E04YCF must be preceded by one of the nonlinear least-squares
          routines mentioned in Section 1, and requires the parameters
           FSUMSQ, S and V to be supplied by those routines. FSUMSQ is the
           residual sum of squares F(x), and S and V contain the singular
          values and right singular vectors respectively in the singular
          value decomposition of J. S and V are returned directly by the
          comprehensive routines E04FCF(*), E04GBF(*), E04GDF(*) and
          E04HEF(*), but are returned as part of the workspace parameter W
          from the easy-to-use routines E04FDF, E04GCF, E04GEF(*) and
          E04HFF(*). In the case of E04FDF, S starts at W(NS), where

                          NS=6*N+2*M+M*N+1+max(1,N*(N-1)/2)

          and in the cases of the remaining easy-to-use routines, S starts
          at W(NS), where

                     NS=7*N+2*M+M*N+N*(N+1)/2+1+max(1,N*(N-1)/2)

          The parameter V starts immediately following the elements of S,
          so that V starts at W(NV), where

                                      NV=NS+N.

          For all the easy-to-use routines the parameter LV must be
          supplied as N. Thus a call to E04YCF following E04FDF can be
          illustrated as


                 CALL E04FDF (M, N, X, FSUMSQ, IW, LIW, W, LW, IFAIL)
                 NS = 6*N + 2*M + M*N + 1 + MAX(1,(N*(N-1))/2)
                 NV = NS + N
                 CALL E04YCF (JOB, M, N, FSUMSQ, W(NS), W(NV),
                *             N, CJ, WORK, IFAIL)

          where the parameters M, N, FSUMSQ and the (n+n^2) elements W(NS),
          W(NS+1),..., W(NV+N*N-1) must not be altered between the calls
          to E04FDF and E04YCF. The above illustration also holds for a
          call to E04YCF following a call to one of E04GCF, E04GEF(*),
          E04HFF(*) except that NS must be computed as

                NS = 7*N + 2*M + M*N + (N*(N+1))/2 + 1 + MAX(1,(N*(N-1))/2)
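
          For example, the corresponding calling sequence after E04GCF
          (restating the illustration above with the modified value of NS;
          the declarations are assumed to be as in the E04FDF illustration)
          is

                 CALL E04GCF (M, N, X, FSUMSQ, IW, LIW, W, LW, IFAIL)
                 NS = 7*N + 2*M + M*N + (N*(N+1))/2 + 1 +
                *     MAX(1,(N*(N-1))/2)
                 NV = NS + N
                 CALL E04YCF (JOB, M, N, FSUMSQ, W(NS), W(NV),
                *             N, CJ, WORK, IFAIL)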

          4. References

          [1]   Bard Y (1974) Nonlinear Parameter Estimation. Academic
                Press.

          [2]   Wolberg J R (1967) Prediction Analysis. Van Nostrand.

          5. Parameters

           1:  JOB -- INTEGER                                         Input
               On entry: specifies which elements of C are returned, as
               follows:
               JOB = -1
                     The n by n symmetric matrix C is returned.

               JOB = 0
                     The diagonal elements of C are returned.

               JOB > 0
                     The elements of column JOB of C are returned.
               Constraint: -1 <= JOB <= N.

           2:  M -- INTEGER                                           Input
               On entry: the number m of observations (residuals f_i(x)).
               Constraint: M >= N.

           3:  N -- INTEGER                                           Input
               On entry: the number n of variables (x_j).
               Constraint: 1 <= N <= M.

           4:  FSUMSQ -- DOUBLE PRECISION                             Input
               On entry: the sum of squares of the residuals, F(x), at the
               solution x, as returned by the nonlinear least-squares
               routine. Constraint: FSUMSQ >= 0.0.

           5:  S(N) -- DOUBLE PRECISION array                         Input
               On entry: the n singular values of the Jacobian as returned
               by the nonlinear least-squares routine. See Section 3 for
               information on supplying S following one of the easy-to-use
               routines.

           6:  V(LV,N) -- DOUBLE PRECISION array               Input/Output
               On entry: the n by n right-hand orthogonal matrix (the
               right singular vectors) of J as returned by the nonlinear
               least-squares routine. See Section 3 for information on
               supplying V following one of the easy-to-use routines. On
               exit: when JOB >= 0 then V is unchanged.

               When JOB = -1 then the leading n by n part of V is
               overwritten by the n by n matrix C. When E04YCF is called
               with JOB = -1 following an easy-to-use routine this means
               that C is returned, column by column, in the n^2 elements
               of W given by W(NV), W(NV+1),..., W(NV+N^2-1). (See Section
               3 for the definition of NV.)

           7:  LV -- INTEGER                                          Input
               On entry: the first dimension of the array V as declared in
               the (sub)program from which E04YCF is called. When V is
               passed in the workspace parameter W following one of the
               easy-to-use least-squares routines, LV must be the value N.

           8:  CJ(N) -- DOUBLE PRECISION array                       Output
               On exit: with JOB = 0, CJ returns the n diagonal elements
               of C.

               With JOB = j>0, CJ returns the n elements of the jth column
               of C.

               When JOB = -1, CJ is not referenced.

           9:  WORK(N) -- DOUBLE PRECISION array                  Workspace
               When JOB = -1 or 0 then WORK is used as internal workspace.

               When JOB > 0, WORK is not referenced.

          10:  IFAIL -- INTEGER                                Input/Output
               On entry: IFAIL must be set to 0, -1 or 1. Users who are
               unfamiliar with this parameter should refer to the Essential
               Introduction for details.

               On exit: IFAIL = 0 unless the routine detects an error or
               gives a warning (see Section 6).

               For this routine, because the values of output parameters
               may be useful even if IFAIL /=0 on exit, users are
               recommended to set IFAIL to -1 before entry. It is then
               essential to test the value of IFAIL on exit. To suppress
               the output of an error message when soft failure occurs, set
               IFAIL to 1.

          6. Error Indicators and Warnings

          Errors or warnings specified by the routine:

          IFAIL= 1
               On entry JOB < -1,

               or       JOB > N,

               or       N < 1,

               or       M < N,

               or       FSUMSQ < 0.0.

          IFAIL= 2
               The singular values are all zero, so that at the solution
               the Jacobian matrix J has rank 0.

          IFAIL> 2
               At the solution the Jacobian matrix contains linear, or near
               linear, dependencies amongst its columns. In this case the
               required elements of C have still been computed based upon J
               having an assumed rank given by (IFAIL-2). The rank is
               computed by regarding singular values SV(j) that are not
               larger than 10*(epsilon)*SV(1) as zero, where (epsilon) is
               the machine precision (see X02AJF(*)). Users who expect near
               linear dependencies at the solution and are happy with this
               tolerance in determining rank should call E04YCF with IFAIL
               = 1 in order to prevent termination by P01ABF(*). It is then
                essential to test the value of IFAIL on exit from E04YCF (a
                coding sketch is given at the end of this section).

          Overflow
               If overflow occurs then either an element of C is very
               large, or the singular values or singular vectors have been
               incorrectly supplied.
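
          As an illustration of the soft-failure usage recommended under
          IFAIL > 2 above, the assumed rank of J might be recovered as
          follows (the variable IRANK is the caller's own and is not part
          of the routine's interface):

                 IFAIL = 1
                 CALL E04YCF (JOB, M, N, FSUMSQ, S, V, LV, CJ, WORK,
                *             IFAIL)
                 IF (IFAIL.GT.2) THEN
           *        C has been computed with J treated as having
           *        rank IFAIL-2
                    IRANK = IFAIL - 2
                 ELSE IF (IFAIL.NE.0) THEN
           *        Invalid arguments (IFAIL=1) or J of rank 0 (IFAIL=2):
           *        the returned elements of C should not be used
                    WRITE (*,*) 'E04YCF failed, IFAIL =', IFAIL
                 END IF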

          7. Accuracy

          The computed elements of C will be the exact covariances
          corresponding to a closely neighbouring Jacobian matrix J.

          8. Further Comments

          When JOB = -1 the time taken by the routine is approximately
          proportional to n^3. When JOB >= 0 the time taken by the routine
          is approximately proportional to n^2.

          9. Example

          To estimate the variance-covariance matrix C for the least-
          squares estimates of x_1, x_2 and x_3 in the model

                        y = x_1 + t_1/(x_2*t_2 + x_3*t_3)

          using the 15 sets of data given in the following table:

                                  y   t_1  t_2 t_3
                                 0.14  1.0 15.0 1.0
                                 0.18  2.0 14.0 2.0
                                 0.22  3.0 13.0 3.0
                                 0.25  4.0 12.0 4.0
                                 0.29  5.0 11.0 5.0
                                 0.32  6.0 10.0 6.0
                                 0.35  7.0  9.0 7.0
                                 0.39  8.0  8.0 8.0
                                 0.37  9.0  7.0 7.0
                                 0.58 10.0  6.0 6.0
                                 0.73 11.0  5.0 5.0
                                 0.96 12.0  4.0 4.0
                                 1.34 13.0  3.0 3.0
                                 2.10 14.0  2.0 2.0
                                 4.39 15.0  1.0 1.0

          The program uses (0.5,1.0,1.5) as the initial guess at the
          position of the minimum and computes the least-squares solution
          using E04FDF. See the routine document E04FDF for further
          information.

          The example program is not reproduced here. The source code for
          all example programs is distributed with the NAG Foundation
          Library software and should be available on-line.
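
          For guidance, one possible outline of such a driver is sketched
          below. This is not the distributed example program: it assumes
          that the residuals are supplied through a user routine with the
          fixed name LSFUN1, with the argument list summarised in the
          E04FDF document, and the workspace sizes LIW and LW are simply
          set generously here rather than to the exact minima given there.
          JOB = 0 is used, so that only the variances (the diagonal
          elements of C) are returned in CJ.

           *     Illustrative outline only -- not the distributed example.
                 PROGRAM E04YCX
                 INTEGER          M, N, LIW, LW
                 PARAMETER        (M=15, N=3, LIW=50, LW=500)
                 INTEGER          IW(LIW), IFAIL, JOB, NS, NV, J
                 DOUBLE PRECISION X(N), W(LW), CJ(N), WORK(N), FSUMSQ
           *     Initial guess at the position of the minimum
                 X(1) = 0.5D0
                 X(2) = 1.0D0
                 X(3) = 1.5D0
                 IFAIL = 1
                 CALL E04FDF (M, N, X, FSUMSQ, IW, LIW, W, LW, IFAIL)
                 IF (IFAIL.NE.0) WRITE (*,*) 'E04FDF: IFAIL =', IFAIL
           *     Locate S and V inside the workspace W (see Section 3)
                 NS = 6*N + 2*M + M*N + 1 + MAX(1,(N*(N-1))/2)
                 NV = NS + N
                 JOB = 0
                 IFAIL = 1
                 CALL E04YCF (JOB, M, N, FSUMSQ, W(NS), W(NV), N, CJ,
                *             WORK, IFAIL)
                 IF (IFAIL.EQ.0 .OR. IFAIL.GT.2) THEN
                    WRITE (*,*) 'Solution  ', (X(J), J=1,N)
                    WRITE (*,*) 'Variances ', (CJ(J), J=1,N)
                 END IF
                 STOP
                 END

           *     Residuals for the model of Section 9:
           *        f_i(x) = x_1 + t_1/(x_2*t_2 + x_3*t_3) - y_i
                 SUBROUTINE LSFUN1 (M, N, XC, FVECC)
                 INTEGER          M, N
                 DOUBLE PRECISION XC(N), FVECC(M)
                 INTEGER          I
                 DOUBLE PRECISION Y(15), T1, T2, T3
                 DATA Y /0.14D0, 0.18D0, 0.22D0, 0.25D0, 0.29D0, 0.32D0,
                *        0.35D0, 0.39D0, 0.37D0, 0.58D0, 0.73D0, 0.96D0,
                *        1.34D0, 2.10D0, 4.39D0/
                 DO 10 I = 1, M
                    T1 = DBLE(I)
                    T2 = DBLE(16-I)
                    T3 = MIN(T1,T2)
                    FVECC(I) = XC(1) + T1/(XC(2)*T2+XC(3)*T3) - Y(I)
              10 CONTINUE
                 RETURN
                 END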

\end{verbatim}
\endscroll
\end{page}