Parallel Colt 0.7.2

optimization
Class Uncmin_f77

java.lang.Object
  extended by optimization.Uncmin_f77

public class Uncmin_f77
extends Object

This class contains Java translations of the UNCMIN unconstrained optimization routines. See R.B. Schnabel, J.E. Koontz, and B.E. Weiss, A Modular System of Algorithms for Unconstrained Minimization, Report CU-CS-240-82, Comp. Sci. Dept., University of Colorado at Boulder, 1982.

IMPORTANT: The "_f77" suffixes indicate that these routines use FORTRAN style indexing. For example, you will see

   for (i = 1; i <= n; i++)
rather than
   for (i = 0; i < n; i++)
To use the "_f77" routines you will have to declare your vectors and matrices to be one element larger (e.g., v[101] rather than v[100], and a[101][101] rather than a[100][100]), and you will have to fill elements 1 through n rather than elements 0 through n - 1. Versions of these programs that use C/Java style indexing will eventually be available. They will end with the suffix "_j".
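As an illustrative sketch (not part of UNCMIN itself), a caller using the _f77 routines with n = 3 would allocate and fill a vector like this:

```java
public class F77IndexingDemo {

    // Sum elements 1 through n of a FORTRAN-style vector (element 0 is unused).
    public static double sum_f77(int n, double[] v) {
        double total = 0.0;
        for (int i = 1; i <= n; i++) total += v[i];
        return total;
    }

    public static void main(String[] args) {
        int n = 3;
        double[] v = new double[n + 1];        // one element larger; v[0] is wasted
        for (int i = 1; i <= n; i++) v[i] = i; // fill elements 1 through n
        System.out.println(sum_f77(n, v));     // 6.0
    }
}
```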

This class was translated by a statistician from a FORTRAN version of UNCMIN. It is NOT an official translation. It wastes memory by failing to use the first elements of vectors. When public domain Java optimization routines become available from the people who produced UNCMIN, then THE CODE PRODUCED BY THE NUMERICAL ANALYSTS SHOULD BE USED.

Meanwhile, if you have suggestions for improving this code, please contact Steve Verrill at steve@ws10.fpl.fs.fed.us.

Version:
.5 --- April 14, 1998
Author:
(translator) Steve Verrill

Constructor Summary
Uncmin_f77()
           
 
Method Summary
static void bakslv_f77(int n, double[][] a, double[] x, double[] b)
           The bakslv_f77 method solves Ax = b where A is an upper triangular matrix.
static void chlhsn_f77(int n, double[][] a, double epsm, double[] sx, double[] udiag)
           The chlhsn_f77 method finds "THE L(L-TRANSPOSE) [WRITTEN LL+] DECOMPOSITION OF THE PERTURBED MODEL HESSIAN MATRIX A+MU*I (WHERE MU >= 0 AND I IS THE IDENTITY MATRIX) WHICH IS SAFELY POSITIVE DEFINITE."
static void choldc_f77(int n, double[][] a, double diagmx, double tol, double[] addmax)
           The choldc_f77 method finds "THE PERTURBED L(L-TRANSPOSE) [WRITTEN LL+] DECOMPOSITION OF A+D, WHERE D IS A NON-NEGATIVE DIAGONAL MATRIX ADDED TO A IF NECESSARY TO ALLOW THE CHOLESKY DECOMPOSITION TO CONTINUE." Translated by Steve Verrill, April 15, 1998.
static void dfault_f77(int n, double[] x, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl)
           The dfault_f77 method sets default values for each input variable to the minimization algorithm.
static void dogdrv_f77(int n, double[] x, double[] f, double[] g, double[][] a, double[] p, double[] xpls, double[] fpls, optimization.Uncmin_methods minclass, double[] sx, double[] stepmx, double[] steptl, double[] dlt, int[] iretcd, boolean[] mxtake, double[] sc, double[] wrk1, double[] wrk2, double[] wrk3)
           The dogdrv_f77 method finds the next Newton iterate (xpls) by the double dogleg method.
static void dogstp_f77(int n, double[] g, double[][] a, double[] p, double[] sx, double rnwtln, double[] dlt, boolean[] nwtake, boolean[] fstdog, double[] ssd, double[] v, double[] cln, double[] eta, double[] sc, double[] stepmx)
           The dogstp_f77 method finds the new step by the double dogleg approach.
static void forslv_f77(int n, double[][] a, double[] x, double[] b)
           The forslv_f77 method solves Ax = b where A is a lower triangular matrix.
static void fstocd_f77(int n, double[] x, optimization.Uncmin_methods minclass, double[] sx, double rnoise, double[] g)
           The fstocd_f77 method finds a central difference approximation to the gradient of the function to be minimized.
static void fstofd_f77(int n, double[] xpls, optimization.Uncmin_methods minclass, double[] fpls, double[][] a, double[] sx, double rnoise, double[] fhat)
           This version of the fstofd_f77 method finds a finite difference approximation to the Hessian.
static void fstofd_f77(int n, double[] xpls, optimization.Uncmin_methods minclass, double[] fpls, double[] g, double[] sx, double rnoise)
           This version of the fstofd_f77 method finds first order finite difference approximations for gradients.
static void grdchk_f77(int n, double[] x, optimization.Uncmin_methods minclass, double[] f, double[] g, double[] typsiz, double[] sx, double[] fscale, double rnf, double analtl, double[] gest)
           The grdchk_f77 method checks the analytic gradient supplied by the user.
static void heschk_f77(int n, double[] x, optimization.Uncmin_methods minclass, double[] f, double[] g, double[][] a, double[] typsiz, double[] sx, double rnf, double analtl, int[] iagflg, double[] udiag, double[] wrk1, double[] wrk2)
           The heschk_f77 method checks the analytic Hessian supplied by the user.
static void hookdr_f77(int n, double[] x, double[] f, double[] g, double[][] a, double[] udiag, double[] p, double[] xpls, double[] fpls, optimization.Uncmin_methods minclass, double[] sx, double[] stepmx, double[] steptl, double[] dlt, int[] iretcd, boolean[] mxtake, double[] amu, double[] dltp, double[] phi, double[] phip0, double[] sc, double[] xplsp, double[] wrk0, double epsm, int[] itncnt)
           The hookdr_f77 method finds the next Newton iterate (xpls) by the More-Hebdon technique.
static void hookst_f77(int n, double[] g, double[][] a, double[] udiag, double[] p, double[] sx, double rnwtln, double[] dlt, double[] amu, double[] dltp, double[] phi, double[] phip0, boolean[] fstime, double[] sc, boolean[] nwtake, double[] wrk0, double epsm)
           The hookst_f77 method finds a new step by the More-Hebdon algorithm.
static void hsnint_f77(int n, double[][] a, double[] sx, int[] method)
           The hsnint_f77 method provides the initial Hessian when secant updates are being used.
static void lltslv_f77(int n, double[][] a, double[] x, double[] b)
           The lltslv_f77 method solves Ax = b where A has the form L(L transpose) but only the lower triangular part, L, is stored.
static void lnsrch_f77(int n, double[] x, double[] f, double[] g, double[] p, double[] xpls, double[] fpls, optimization.Uncmin_methods minclass, boolean[] mxtake, int[] iretcd, double[] stepmx, double[] steptl, double[] sx)
           The lnsrch_f77 method finds the next Newton iterate by line search.
static void mvmltl_f77(int n, double[][] a, double[] x, double[] y)
           The mvmltl_f77 method computes y = Lx where L is a lower triangular matrix stored in A.
static void mvmlts_f77(int n, double[][] a, double[] x, double[] y)
           The mvmlts_f77 method computes y = Ax where A is a symmetric matrix stored in its lower triangular part.
static void mvmltu_f77(int n, double[][] a, double[] x, double[] y)
           The mvmltu_f77 method computes Y = (L transpose)X where L is a lower triangular matrix stored in A (L transpose is taken implicitly).
static void optchk_f77(int n, double[] x, double[] typsiz, double[] sx, double[] fscale, double[] gradtl, int[] itnlim, int[] ndigit, double epsm, double[] dlt, int[] method, int[] iexp, int[] iagflg, int[] iahflg, double[] stepmx, int[] msg)
           The optchk_f77 method checks the input for reasonableness.
static void optdrv_f77(int n, double[] x, optimization.Uncmin_methods minclass, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag, double[] g, double[] p, double[] sx, double[] wrk0, double[] wrk1, double[] wrk2, double[] wrk3)
           The optdrv_f77 method is the driver for the nonlinear optimization problem.
static void optif0_f77(int n, double[] x, optimization.Uncmin_methods minclass, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag)
           The optif0_f77 method minimizes a smooth nonlinear function of n variables.
static void optif9_f77(int n, double[] x, optimization.Uncmin_methods minclass, double[] typsiz, double[] fscale, int[] method, int[] iexp, int[] msg, int[] ndigit, int[] itnlim, int[] iagflg, int[] iahflg, double[] dlt, double[] gradtl, double[] stepmx, double[] steptl, double[] xpls, double[] fpls, double[] gpls, int[] itrmcd, double[][] a, double[] udiag)
           The optif9_f77 method minimizes a smooth nonlinear function of n variables.
static void optstp_f77(int n, double[] xpls, double[] fpls, double[] gpls, double[] x, int[] itncnt, int[] icscmx, int[] itrmcd, double[] gradtl, double[] steptl, double[] sx, double[] fscale, int[] itnlim, int[] iretcd, boolean[] mxtake, int[] msg)
           The optstp_f77 method determines whether the algorithm should terminate due to any of the following: 1) problem solved within user tolerance, 2) convergence within user tolerance, 3) iteration limit reached, or 4) divergence or too restrictive maximum step (stepmx) suspected. Translated by Steve Verrill, May 12, 1998.
static void qraux1_f77(int n, double[][] r, int i)
           The qraux1_f77 method interchanges rows i,i+1 of the upper Hessenberg matrix r, columns i to n.
static void qraux2_f77(int n, double[][] r, int i, double a, double b)
           The qraux2_f77 method pre-multiplies r by the Jacobi rotation j(i,i+1,a,b).
static void qrupdt_f77(int n, double[][] a, double[] u, double[] v)
           The qrupdt_f77 method finds an orthogonal n by n matrix, Q*, and an upper triangular n by n matrix, R*, such that (Q*)(R*) = R+U(V+).
static void result_f77(int n, double[] x, double[] f, double[] g, double[][] a, double[] p, int[] itncnt, int iflg)
           The result_f77 method prints information.
static void sclmul_f77(int n, double s, double[] v, double[] z)
           The sclmul_f77 method multiplies a vector by a scalar.
static void secfac_f77(int n, double[] x, double[] g, double[][] a, double[] xpls, double[] gpls, double epsm, int[] itncnt, double rnf, int[] iagflg, boolean[] noupdt, double[] s, double[] y, double[] u, double[] w)
           The secfac_f77 method updates the Hessian by the BFGS factored technique.
static void secunf_f77(int n, double[] x, double[] g, double[][] a, double[] udiag, double[] xpls, double[] gpls, double epsm, int[] itncnt, double rnf, int[] iagflg, boolean[] noupdt, double[] s, double[] y, double[] t)
           The secunf_f77 method updates the Hessian by the BFGS unfactored approach.
static void sndofd_f77(int n, double[] xpls, optimization.Uncmin_methods minclass, double[] fpls, double[][] a, double[] sx, double rnoise, double[] stepsz, double[] anbr)
           The sndofd_f77 method finds second order forward finite difference approximations to the Hessian.
static void tregup_f77(int n, double[] x, double[] f, double[] g, double[][] a, optimization.Uncmin_methods minclass, double[] sc, double[] sx, boolean[] nwtake, double[] stepmx, double[] steptl, double[] dlt, int[] iretcd, double[] xplsp, double[] fplsp, double[] xpls, double[] fpls, boolean[] mxtake, int method, double[] udiag)
           The tregup_f77 method decides whether to accept xpls = x + sc as the next iterate and update the trust region dlt.
 
Methods inherited from class java.lang.Object
equals, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
 

Constructor Detail

Uncmin_f77

public Uncmin_f77()
Method Detail

optif0_f77

public static void optif0_f77(int n,
                              double[] x,
                              optimization.Uncmin_methods minclass,
                              double[] xpls,
                              double[] fpls,
                              double[] gpls,
                              int[] itrmcd,
                              double[][] a,
                              double[] udiag)

The optif0_f77 method minimizes a smooth nonlinear function of n variables. A method that computes the function value at any point must be supplied. (See Uncmin_methods.java and UncminTest.java.) Derivative values are not required. The optif0_f77 method provides the simplest user access to the UNCMIN minimization routines. Without a recompile, the user has no control over options. For details, see the Schnabel et al. reference and the comments in the code. Translated by Steve Verrill, August 4, 1998.

Parameters:
n - The number of arguments of the function to minimize
x - The initial estimate of the minimum point
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The class must define:
1.) f_to_minimize, the method to minimize, of the form public static double f_to_minimize(double x[]), where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
2.) gradient, of the form public static void gradient(double x[], double g[]), where g is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
3.) hessian, of the form public static void hessian(double x[], double h[][]), where h is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, the hessian method should fill only the lower triangle (and diagonal) of h.
xpls - The final estimate of the minimum point
fpls - The value of f_to_minimize at xpls
gpls - The gradient at the local minimum xpls
itrmcd - Termination code:
ITRMCD = 0: Optimal solution found
ITRMCD = 1: Terminated with gradient small, xpls is probably optimal
ITRMCD = 2: Terminated with stepsize small, xpls is probably optimal
ITRMCD = 3: Lower point cannot be found, xpls is probably optimal
ITRMCD = 4: Iteration limit (150) exceeded
ITRMCD = 5: Too many large steps, function may be unbounded
a - Workspace for the Hessian (or its estimate) and its Cholesky decomposition
udiag - Workspace for the diagonal of the Hessian

optif9_f77

public static void optif9_f77(int n,
                              double[] x,
                              optimization.Uncmin_methods minclass,
                              double[] typsiz,
                              double[] fscale,
                              int[] method,
                              int[] iexp,
                              int[] msg,
                              int[] ndigit,
                              int[] itnlim,
                              int[] iagflg,
                              int[] iahflg,
                              double[] dlt,
                              double[] gradtl,
                              double[] stepmx,
                              double[] steptl,
                              double[] xpls,
                              double[] fpls,
                              double[] gpls,
                              int[] itrmcd,
                              double[][] a,
                              double[] udiag)

The optif9_f77 method minimizes a smooth nonlinear function of n variables. A method that computes the function value at any point must be supplied. (See Uncmin_methods.java and UncminTest.java.) Derivative values are not required. The optif9_f77 method provides complete user access to the UNCMIN minimization routines. The user has full control over options. For details, see the Schnabel et al. reference and the comments in the code. Translated by Steve Verrill, August 4, 1998.

Parameters:
n - The number of arguments of the function to minimize
x - The initial estimate of the minimum point
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The class must define:
1.) f_to_minimize, the method to minimize, of the form public static double f_to_minimize(double x[]), where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
2.) gradient, of the form public static void gradient(double x[], double g[]), where g is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
3.) hessian, of the form public static void hessian(double x[], double h[][]), where h is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, the hessian method should fill only the lower triangle (and diagonal) of h.
typsiz - Typical size for each component of x
fscale - Estimate of the scale of the objective function
method - Algorithm to use to solve the minimization problem = 1 line search = 2 double dogleg = 3 More-Hebdon
iexp - = 1 if the optimization function f_to_minimize is expensive to evaluate, = 0 otherwise. If iexp = 1, then the Hessian will be evaluated by secant update rather than analytically or by finite differences.
msg - Message to inhibit certain automatic checks and output
ndigit - Number of good digits in the minimization function
itnlim - Maximum number of allowable iterations
iagflg - = 0 if an analytic gradient is not supplied
iahflg - = 0 if an analytic Hessian is not supplied
dlt - Trust region radius
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
stepmx - Maximum allowable step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
xpls - The final estimate of the minimum point
fpls - The value of f_to_minimize at xpls
gpls - The gradient at the local minimum xpls
itrmcd - Termination code:
ITRMCD = 0: Optimal solution found
ITRMCD = 1: Terminated with gradient small, xpls is probably optimal
ITRMCD = 2: Terminated with stepsize small, xpls is probably optimal
ITRMCD = 3: Lower point cannot be found, xpls is probably optimal
ITRMCD = 4: Iteration limit (150) exceeded
ITRMCD = 5: Too many large steps, function may be unbounded
a - Workspace for the Hessian (or its estimate) and its Cholesky decomposition
udiag - Workspace for the diagonal of the Hessian

bakslv_f77

public static void bakslv_f77(int n,
                              double[][] a,
                              double[] x,
                              double[] b)

The bakslv_f77 method solves Ax = b where A is an upper triangular matrix. Note that A is input as a lower triangular matrix and this method takes its transpose implicitly. Translated by Steve Verrill, April 14, 1998.

Parameters:
n - Dimension of the problem
a - n by n lower triangular matrix (preserved)
x - The solution vector
b - The right-hand side vector
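The back substitution that bakslv_f77 describes can be sketched as follows. This is a simplified stand-alone illustration (not the actual translation): it solves (L-transpose)x = b with L stored in the lower triangle and the transpose taken implicitly, using FORTRAN-style indexing.

```java
public class BakslvSketch {

    // Solve (L-transpose) x = b, where L is lower triangular and stored in a.
    // The transpose is taken implicitly by reading a[j][i] in place of a[i][j].
    // Arrays use FORTRAN-style indexing (element 0 unused).
    public static void bakslv(int n, double[][] a, double[] x, double[] b) {
        for (int i = n; i >= 1; i--) {
            double sum = 0.0;
            for (int j = i + 1; j <= n; j++) {
                sum += a[j][i] * x[j];   // a[j][i] is (L-transpose)[i][j]
            }
            x[i] = (b[i] - sum) / a[i][i];
        }
    }

    public static void main(String[] args) {
        // L = [2 0; 1 3] (1-based), so L-transpose = [2 1; 0 3].
        double[][] a = {{0, 0, 0}, {0, 2, 0}, {0, 1, 3}};
        double[] b = {0, 7, 9};
        double[] x = new double[3];
        bakslv(2, a, x, b);
        System.out.println(x[1] + " " + x[2]);   // 2.0 3.0
    }
}
```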

chlhsn_f77

public static void chlhsn_f77(int n,
                              double[][] a,
                              double epsm,
                              double[] sx,
                              double[] udiag)

The chlhsn_f77 method finds "THE L(L-TRANSPOSE) [WRITTEN LL+] DECOMPOSITION OF THE PERTURBED MODEL HESSIAN MATRIX A+MU*I (WHERE MU >= 0 AND I IS THE IDENTITY MATRIX) WHICH IS SAFELY POSITIVE DEFINITE. IF A IS SAFELY POSITIVE DEFINITE UPON ENTRY, THEN MU = 0." Translated by Steve Verrill, April 14, 1998.

Parameters:
n - Dimension of the problem
a - On entry: A is the model Hessian (only the lower triangle and diagonal stored) On exit: A contains L of the LL+ decomposition of the perturbed model Hessian in the lower triangle and diagonal, and contains the Hessian in the upper triangle and udiag
epsm - Machine epsilon
sx - Scaling vector for x
udiag - On exit: Contains the diagonal of the Hessian

choldc_f77

public static void choldc_f77(int n,
                              double[][] a,
                              double diagmx,
                              double tol,
                              double[] addmax)

The choldc_f77 method finds "THE PERTURBED L(L-TRANSPOSE) [WRITTEN LL+] DECOMPOSITION OF A+D, WHERE D IS A NON-NEGATIVE DIAGONAL MATRIX ADDED TO A IF NECESSARY TO ALLOW THE CHOLESKY DECOMPOSITION TO CONTINUE." Translated by Steve Verrill, April 15, 1998.

Parameters:
n - Dimension of the problem
a - On entry: matrix for which to find the perturbed Cholesky decomposition On exit: contains L of the LL+ decomposition in lower triangle
diagmx - Maximum diagonal element of "A"
tol - Tolerance
addmax - Maximum amount implicitly added to diagonal of "A" in forming the Cholesky decomposition of A+D
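The idea behind choldc_f77 can be sketched as a standard Cholesky loop that adds to the diagonal whenever a pivot would be too small. This is a simplified stand-alone illustration, not the actual translation; the real routine derives its threshold from diagmx and tol, whereas the fixed minPivot here is an assumption for clarity.

```java
public class CholdcSketch {

    // Perturbed Cholesky (FORTRAN-style indexing): overwrites the lower
    // triangle of a with L such that L(L-transpose) = A + D, where the
    // non-negative diagonal D is just large enough to keep every pivot
    // >= minPivot. Returns the largest amount added to any diagonal element.
    public static double choldc(int n, double[][] a, double minPivot) {
        double addmax = 0.0;
        for (int j = 1; j <= n; j++) {
            double sum = 0.0;
            for (int k = 1; k < j; k++) sum += a[j][k] * a[j][k];
            double pivot = a[j][j] - sum;
            if (pivot < minPivot) {              // perturb so the decomposition can continue
                addmax = Math.max(addmax, minPivot - pivot);
                pivot = minPivot;
            }
            a[j][j] = Math.sqrt(pivot);
            for (int i = j + 1; i <= n; i++) {
                sum = 0.0;
                for (int k = 1; k < j; k++) sum += a[i][k] * a[j][k];
                a[i][j] = (a[i][j] - sum) / a[j][j];
            }
        }
        return addmax;
    }

    public static void main(String[] args) {
        // A = [4 2; 2 5] is already positive definite, so nothing is added.
        double[][] a = {{0, 0, 0}, {0, 4, 2}, {0, 2, 5}};
        System.out.println(choldc(2, a, 1e-8));              // 0.0
        System.out.println(a[1][1] + " " + a[2][1] + " " + a[2][2]);  // 2.0 1.0 2.0
    }
}
```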

dfault_f77

public static void dfault_f77(int n,
                              double[] x,
                              double[] typsiz,
                              double[] fscale,
                              int[] method,
                              int[] iexp,
                              int[] msg,
                              int[] ndigit,
                              int[] itnlim,
                              int[] iagflg,
                              int[] iahflg,
                              double[] dlt,
                              double[] gradtl,
                              double[] stepmx,
                              double[] steptl)

The dfault_f77 method sets default values for each input variable to the minimization algorithm. Translated by Steve Verrill, August 4, 1998.

Parameters:
n - Dimension of the problem
x - Initial estimate of the solution (to compute max step size)
typsiz - Typical size for each component of x
fscale - Estimate of the scale of the minimization function
method - Algorithm to use to solve the minimization problem
iexp - = 0 if the minimization function is not expensive to evaluate
msg - Message to inhibit certain automatic checks and output
ndigit - Number of good digits in the minimization function
itnlim - Maximum number of allowable iterations
iagflg - = 0 if an analytic gradient is not supplied
iahflg - = 0 if an analytic Hessian is not supplied
dlt - Trust region radius
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
stepmx - "Value of zero to trip default maximum in optchk"
steptl - Tolerance at which successive iterates are considered close enough to terminate the algorithm

dogdrv_f77

public static void dogdrv_f77(int n,
                              double[] x,
                              double[] f,
                              double[] g,
                              double[][] a,
                              double[] p,
                              double[] xpls,
                              double[] fpls,
                              optimization.Uncmin_methods minclass,
                              double[] sx,
                              double[] stepmx,
                              double[] steptl,
                              double[] dlt,
                              int[] iretcd,
                              boolean[] mxtake,
                              double[] sc,
                              double[] wrk1,
                              double[] wrk2,
                              double[] wrk3)

The dogdrv_f77 method finds the next Newton iterate (xpls) by the double dogleg method. It drives dogstp_f77. Translated by Steve Verrill, April 15, 1998.

Parameters:
n - Dimension of the problem
x - The old iterate
f - Function value at the old iterate
g - Gradient or approximation at the old iterate
a - Cholesky decomposition of Hessian in lower triangular part and diagonal
p - Newton step
xpls - The new iterate
fpls - Function value at the new iterate
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The class must define:
1.) f_to_minimize, the method to minimize, of the form public static double f_to_minimize(double x[]), where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
2.) gradient, of the form public static void gradient(double x[], double g[]), where g is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
3.) hessian, of the form public static void hessian(double x[], double h[][]), where h is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, the hessian method should fill only the lower triangle (and diagonal) of h.
sx - Scaling vector for x
stepmx - Maximum allowable step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
dlt - Trust region radius (value needs to be retained between successive calls)
iretcd - Return code: 0 --- satisfactory xpls found; 1 --- failed to find a satisfactory xpls sufficiently distinct from x
mxtake - Boolean flag indicating that a step of maximum length was used
sc - Workspace (current step)
wrk1 - Workspace (and placeholder argument to tregup)
wrk2 - Workspace
wrk3 - Workspace

dogstp_f77

public static void dogstp_f77(int n,
                              double[] g,
                              double[][] a,
                              double[] p,
                              double[] sx,
                              double rnwtln,
                              double[] dlt,
                              boolean[] nwtake,
                              boolean[] fstdog,
                              double[] ssd,
                              double[] v,
                              double[] cln,
                              double[] eta,
                              double[] sc,
                              double[] stepmx)

The dogstp_f77 method finds the new step by the double dogleg approach. Translated by Steve Verrill, April 21, 1998.

Parameters:
n - Dimension of the problem
g - Gradient at the current iterate, g(x)
a - Cholesky decomposition of the Hessian in the lower triangular part and diagonal
p - Newton step
sx - Scaling vector for x
rnwtln - Newton step length
dlt - Trust region radius
nwtake - Boolean, = true if the Newton step was taken
fstdog - Boolean, = true if on the first leg of the dogleg
ssd - Workspace [Cauchy step to the minimum of the quadratic model in the scaled steepest descent direction] (retain value between successive calls)
v - Workspace (retain value between successive calls)
cln - Cauchy length (retain value between successive calls)
eta - (retain value between successive calls)
sc - Current step
stepmx - Maximum allowable step size

forslv_f77

public static void forslv_f77(int n,
                              double[][] a,
                              double[] x,
                              double[] b)

The forslv_f77 method solves Ax = b where A is a lower triangular matrix. Translated by Steve Verrill, April 21, 1998.

Parameters:
n - The dimension of the problem
a - The lower triangular matrix (preserved)
x - The solution vector
b - The right-hand side vector
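The forward substitution forslv_f77 performs can be sketched as follows (a simplified stand-alone version with FORTRAN-style indexing, not the actual translation):

```java
public class ForslvSketch {

    // Solve L x = b for lower triangular L stored in a
    // (FORTRAN-style indexing: row and column 0 unused).
    public static void forslv(int n, double[][] a, double[] x, double[] b) {
        for (int i = 1; i <= n; i++) {
            double sum = 0.0;
            for (int j = 1; j < i; j++) sum += a[i][j] * x[j];
            x[i] = (b[i] - sum) / a[i][i];
        }
    }

    public static void main(String[] args) {
        // L = [2 0; 1 3] (1-based), b = (4, 8) gives x = (2, 2).
        double[][] a = {{0, 0, 0}, {0, 2, 0}, {0, 1, 3}};
        double[] x = new double[3];
        forslv(2, a, x, new double[]{0, 4, 8});
        System.out.println(x[1] + " " + x[2]);   // 2.0 2.0
    }
}
```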

fstocd_f77

public static void fstocd_f77(int n,
                              double[] x,
                              optimization.Uncmin_methods minclass,
                              double[] sx,
                              double rnoise,
                              double[] g)

The fstocd_f77 method finds a central difference approximation to the gradient of the function to be minimized. Translated by Steve Verrill, April 21, 1998.

Parameters:
n - The dimension of the problem
x - The point at which the gradient is to be approximated
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The class must define:
1.) f_to_minimize, the method to minimize, of the form public static double f_to_minimize(double x[]), where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
2.) gradient, of the form public static void gradient(double x[], double g[]), where g is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
3.) hessian, of the form public static void hessian(double x[], double h[][]), where h is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, the hessian method should fill only the lower triangle (and diagonal) of h.
sx - Scaling vector for x
rnoise - Relative noise in the function to be minimized
g - A central difference approximation to the gradient
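A central difference gradient in this style can be sketched as below (FORTRAN-style indexing). This is an illustration, not the actual translation: the Fn interface stands in for Uncmin_methods, the per-component step size built from rnoise is an assumed heuristic, and the real routine also uses the scaling vector sx, omitted here.

```java
public class FstocdSketch {

    interface Fn { double eval(double[] x); }   // stand-in for f_to_minimize

    // Central difference approximation to the gradient of f at x.
    // Elements 1 through n of x and g are used (element 0 unused).
    public static void fstocd(int n, double[] x, Fn f, double rnoise, double[] g) {
        for (int i = 1; i <= n; i++) {
            // Assumed step-size heuristic: cube root of the relative noise,
            // scaled by the magnitude of the component.
            double h = Math.pow(rnoise, 1.0 / 3.0) * Math.max(Math.abs(x[i]), 1.0);
            double save = x[i];
            x[i] = save + h;
            double fplus = f.eval(x);
            x[i] = save - h;
            double fminus = f.eval(x);
            x[i] = save;                          // restore the iterate
            g[i] = (fplus - fminus) / (2.0 * h);
        }
    }

    public static void main(String[] args) {
        // f(x) = x1^2 + 3*x2 has gradient (2*x1, 3); at x = (2, 1) that is (4, 3).
        double[] x = {0, 2.0, 1.0};
        double[] g = new double[3];
        fstocd(2, x, v -> v[1] * v[1] + 3.0 * v[2], 1e-15, g);
        System.out.println(g[1] + " " + g[2]);
    }
}
```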

fstofd_f77

public static void fstofd_f77(int n,
                              double[] xpls,
                              optimization.Uncmin_methods minclass,
                              double[] fpls,
                              double[][] a,
                              double[] sx,
                              double rnoise,
                              double[] fhat)

This version of the fstofd_f77 method finds a finite difference approximation to the Hessian. Translated by Steve Verrill, April 22, 1998.

Parameters:
n - The dimension of the problem
xpls - New iterate
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The class must define:
1.) f_to_minimize, the method to minimize, of the form public static double f_to_minimize(double x[]), where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
2.) gradient, of the form public static void gradient(double x[], double g[]), where g is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
3.) hessian, of the form public static void hessian(double x[], double h[][]), where h is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, the hessian method should fill only the lower triangle (and diagonal) of h.
fpls - fpls[1] through fpls[n] contain the gradient of the function to minimize
a - "FINITE DIFFERENCE APPROXIMATION. ONLY LOWER TRIANGULAR MATRIX AND DIAGONAL ARE RETURNED"
sx - Scaling vector for x
rnoise - Relative noise in the function to be minimized
fhat - Workspace

fstofd_f77

public static void fstofd_f77(int n,
                              double[] xpls,
                              optimization.Uncmin_methods minclass,
                              double[] fpls,
                              double[] g,
                              double[] sx,
                              double rnoise)

This version of the fstofd_f77 method finds first order finite difference approximations for gradients. Translated by Steve Verrill, April 22, 1998.

Parameters:
n - The dimension of the problem
xpls - New iterate
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The class must define:
1.) f_to_minimize, the method to minimize, of the form public static double f_to_minimize(double x[]), where x is the vector of arguments to the function and the return value is the value of the function evaluated at x.
2.) gradient, of the form public static void gradient(double x[], double g[]), where g is the gradient of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the gradient.
3.) hessian, of the form public static void hessian(double x[], double h[][]), where h is the Hessian of f evaluated at x. This method will have an empty body if the user does not wish to provide an analytic estimate of the Hessian. If the user wants Uncmin to check the Hessian, the hessian method should fill only the lower triangle (and diagonal) of h.
fpls - fpls contains the value of the function to minimize at the new iterate
g - finite difference approximation to the gradient
sx - Scaling vector for x
rnoise - Relative noise in the function to be minimized
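
The forward-difference rule this version of fstofd_f77 applies can be sketched as follows. This is an illustration with hypothetical names, not the translation itself; the step-size rule h = sqrt(rnoise) * max(|x[i]|, 1/sx[i]) is the standard UNCMIN choice, but treat the exact constants as an assumption.

```java
import java.util.function.ToDoubleFunction;

class FiniteDiffGradient {
    // Forward-difference gradient with FORTRAN-style indexing
    // (elements 1..n used). fx is f evaluated at x, so only n extra
    // function evaluations are needed.
    static void fstofd(int n, double[] x, ToDoubleFunction<double[]> f,
                       double fx, double[] g, double[] sx, double rnoise) {
        double sqrtr = Math.sqrt(rnoise);
        for (int i = 1; i <= n; i++) {
            double h = sqrtr * Math.max(Math.abs(x[i]), 1.0 / sx[i]);
            double xi = x[i];
            x[i] = xi + h;
            // divide by the step actually taken, not the nominal h
            g[i] = (f.applyAsDouble(x) - fx) / (x[i] - xi);
            x[i] = xi;                       // restore the iterate
        }
    }
}
```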

grdchk_f77

public static void grdchk_f77(int n,
                              double[] x,
                              optimization.Uncmin_methods minclass,
                              double[] f,
                              double[] g,
                              double[] typsiz,
                              double[] sx,
                              double[] fscale,
                              double rnf,
                              double analtl,
                              double[] gest)

The grdchk_f77 method checks the analytic gradient supplied by the user. Translated by Steve Verrill, April 22, 1998.

Parameters:
n - The dimension of the problem
x - The location at which the gradient is to be checked
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
f - Function value
g - Analytic gradient
typsiz - Typical size for each component of x
sx - Scaling vector for x: sx[i] = 1.0/typsiz[i]
fscale - Estimate of scale of f_to_minimize
rnf - Relative noise in f_to_minimize
analtl - Tolerance for comparison of estimated and analytical gradients
gest - Finite difference gradient
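
The comparison a grdchk_f77-style checker makes can be sketched as follows (an illustration with hypothetical names; the exact scaling grdchk_f77 uses may differ): each analytic component is compared with the finite-difference estimate against a tolerance scaled by the size of the gradient and of f.

```java
class GradCheckSketch {
    // Returns -1 if every component of the analytic gradient g agrees
    // with the finite-difference estimate gest to within the scaled
    // tolerance, otherwise the first failing index.
    // FORTRAN-style indexing (elements 1..n used).
    static int check(int n, double[] g, double[] gest,
                     double f, double fscale, double[] typsiz,
                     double analtl) {
        for (int i = 1; i <= n; i++) {
            // scale by the size of f relative to the size of x[i]
            double scale = Math.max(Math.abs(f), fscale) / typsiz[i];
            double tol = analtl * Math.max(Math.abs(g[i]), scale);
            if (Math.abs(g[i] - gest[i]) > tol) return i;
        }
        return -1;
    }
}
```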

heschk_f77

public static void heschk_f77(int n,
                              double[] x,
                              optimization.Uncmin_methods minclass,
                              double[] f,
                              double[] g,
                              double[][] a,
                              double[] typsiz,
                              double[] sx,
                              double rnf,
                              double analtl,
                              int[] iagflg,
                              double[] udiag,
                              double[] wrk1,
                              double[] wrk2)

The heschk_f77 method checks the analytic Hessian supplied by the user. Translated by Steve Verrill, April 23, 1998.

Parameters:
n - The dimension of the problem
x - The location at which the Hessian is to be checked
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
f - Function value
g - Gradient
a - On exit: Hessian in lower triangle
typsiz - Typical size for each component of x
sx - Scaling vector for x: sx[i] = 1.0/typsiz[i]
rnf - Relative noise in f_to_minimize
analtl - Tolerance for comparison of estimated and analytic gradients
iagflg - = 1 if an analytic gradient is supplied
udiag - Workspace
wrk1 - Workspace
wrk2 - Workspace

hookdr_f77

public static void hookdr_f77(int n,
                              double[] x,
                              double[] f,
                              double[] g,
                              double[][] a,
                              double[] udiag,
                              double[] p,
                              double[] xpls,
                              double[] fpls,
                              optimization.Uncmin_methods minclass,
                              double[] sx,
                              double[] stepmx,
                              double[] steptl,
                              double[] dlt,
                              int[] iretcd,
                              boolean[] mxtake,
                              double[] amu,
                              double[] dltp,
                              double[] phi,
                              double[] phip0,
                              double[] sc,
                              double[] xplsp,
                              double[] wrk0,
                              double epsm,
                              int[] itncnt)

The hookdr_f77 method finds the next Newton iterate (xpls) by the More-Hebdon technique. It drives hookst_f77. Translated by Steve Verrill, April 23, 1998.

Parameters:
n - The dimension of the problem
x - The old iterate
f - The function value at the old iterate
g - Gradient or approximation at old iterate
a - Cholesky decomposition of Hessian in lower triangle and diagonal. Hessian in upper triangle and udiag.
udiag - Diagonal of Hessian in a
p - Newton step
xpls - New iterate
fpls - Function value at the new iterate
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
sx - Scaling vector for x
stepmx - Maximum allowable step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
dlt - Trust region radius
iretcd - Return code: = 0, a satisfactory xpls was found; = 1, the routine failed to find an xpls sufficiently distinct from x
mxtake - Boolean flag indicating step of maximum length used
amu - [Retain value between successive calls]
dltp - [Retain value between successive calls]
phi - [Retain value between successive calls]
phip0 - [Retain value between successive calls]
sc - Workspace
xplsp - Workspace
wrk0 - Workspace
epsm - Machine epsilon
itncnt - Iteration count

hookst_f77

public static void hookst_f77(int n,
                              double[] g,
                              double[][] a,
                              double[] udiag,
                              double[] p,
                              double[] sx,
                              double rnwtln,
                              double[] dlt,
                              double[] amu,
                              double[] dltp,
                              double[] phi,
                              double[] phip0,
                              boolean[] fstime,
                              double[] sc,
                              boolean[] nwtake,
                              double[] wrk0,
                              double epsm)

The hookst_f77 method finds a new step by the More-Hebdon algorithm. It is driven by hookdr_f77. Translated by Steve Verrill, April 24, 1998.

Parameters:
n - The dimension of the problem
g - The gradient at the current iterate
a - Cholesky decomposition of the Hessian in the lower triangle and diagonal. Hessian or approximation in upper triangle (and udiag).
udiag - Diagonal of Hessian in a
p - Newton step
sx - Scaling vector for x
rnwtln - Newton step length
dlt - Trust region radius
amu - Retain value between successive calls
dltp - Trust region radius at last exit from this routine
phi - Retain value between successive calls
phip0 - Retain value between successive calls
fstime - "= true if first entry to this routine during the k-th iteration"
sc - Current step
nwtake - = true if Newton step taken
wrk0 - Workspace
epsm - Machine epsilon

hsnint_f77

public static void hsnint_f77(int n,
                              double[][] a,
                              double[] sx,
                              int[] method)

The hsnint_f77 method provides the initial Hessian when secant updates are being used. Translated by Steve Verrill, April 27, 1998.

Parameters:
n - The dimension of the problem
a - Initial Hessian (lower triangular matrix)
sx - Scaling vector for x
method - Algorithm to use to solve the minimization problem: 1, 2 = factored secant method; 3 = unfactored secant method
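
The idea can be sketched as a scaled identity placed in the lower triangle. This is an illustration with hypothetical names; the usual UNCMIN convention puts sx[i]^2 on the diagonal for the unfactored method and sx[i] for the factored methods, but treat that exact scaling as an assumption rather than a statement about hsnint_f77's constants.

```java
class HsnintSketch {
    // Initial Hessian for secant updates: a scaled identity stored in
    // the lower triangle. FORTRAN-style indexing (elements 1..n used).
    static void hsnint(int n, double[][] a, double[] sx, int method) {
        for (int i = 1; i <= n; i++) {
            // assumed convention: sx[i]^2 (unfactored) vs sx[i] (factored)
            a[i][i] = (method == 3) ? sx[i] * sx[i] : sx[i];
            for (int j = 1; j < i; j++) a[i][j] = 0.0;
        }
    }
}
```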

lltslv_f77

public static void lltslv_f77(int n,
                              double[][] a,
                              double[] x,
                              double[] b)

The lltslv_f77 method solves Ax = b where A has the form L(L transpose) but only the lower triangular part, L, is stored. Translated by Steve Verrill, April 27, 1998.

Parameters:
n - The dimension of the problem
a - Matrix of form L(L transpose). On return a is unchanged.
x - The solution vector
b - The right-hand side vector
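
The two triangular solves behind lltslv_f77 can be sketched as follows (an illustration with hypothetical names, not the translation itself): forward-substitute L y = b, then back-substitute (L transpose) x = y, reading the transpose out of the stored lower triangle so L itself is never modified.

```java
class LltSolveSketch {
    // Solve (L L^T) x = b given only the lower triangle L.
    // FORTRAN-style indexing (elements 1..n used).
    static void solve(int n, double[][] l, double[] x, double[] b) {
        // forward substitution: L y = b (y stored in x)
        for (int i = 1; i <= n; i++) {
            double sum = b[i];
            for (int j = 1; j < i; j++) sum -= l[i][j] * x[j];
            x[i] = sum / l[i][i];
        }
        // back substitution: L^T x = y; l[j][i] (j > i) plays the role
        // of the (i,j) entry of L^T, so L is read but never written
        for (int i = n; i >= 1; i--) {
            double sum = x[i];
            for (int j = i + 1; j <= n; j++) sum -= l[j][i] * x[j];
            x[i] = sum / l[i][i];
        }
    }
}
```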

lnsrch_f77

public static void lnsrch_f77(int n,
                              double[] x,
                              double[] f,
                              double[] g,
                              double[] p,
                              double[] xpls,
                              double[] fpls,
                              optimization.Uncmin_methods minclass,
                              boolean[] mxtake,
                              int[] iretcd,
                              double[] stepmx,
                              double[] steptl,
                              double[] sx)

The lnsrch_f77 method finds the next Newton iterate by line search. Translated by Steve Verrill, May 15, 1998.

Parameters:
n - The dimension of the problem
x - Old iterate
f - Function value at old iterate
g - Gradient or approximation at old iterate
p - Non-zero Newton step
xpls - New iterate
fpls - Function value at new iterate
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
mxtake - Boolean flag indicating whether the step of maximum length was used
iretcd - Return code
stepmx - Maximum allowable step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
sx - Scaling vector for x
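
The core of a backtracking line search in the spirit of lnsrch_f77 can be sketched as follows. This is an illustration with hypothetical names: lnsrch_f77 itself backtracks with quadratic and cubic interpolation, while the sketch halves the step to stay short; the alpha-condition it tests is the standard sufficient-decrease criterion.

```java
import java.util.function.ToDoubleFunction;

class LineSearchSketch {
    // Starting from the full Newton step p, shrink lambda until
    //   f(x + lambda*p) <= f(x) + alpha * lambda * (g . p)
    // holds. Returns f at the accepted point, which is written to xpls.
    // FORTRAN-style indexing (elements 1..n used); p must be a descent
    // direction, i.e. g . p < 0.
    static double search(int n, double[] x, double fx, double[] g,
                         double[] p, double[] xpls,
                         ToDoubleFunction<double[]> f) {
        double alpha = 1.0e-4;
        double slope = 0.0;
        for (int i = 1; i <= n; i++) slope += g[i] * p[i];
        double lambda = 1.0;
        while (true) {
            for (int i = 1; i <= n; i++) xpls[i] = x[i] + lambda * p[i];
            double fpls = f.applyAsDouble(xpls);
            if (fpls <= fx + alpha * lambda * slope) return fpls;
            lambda *= 0.5;                     // plain halving (sketch only)
            if (lambda < 1.0e-12) return fpls; // step too small; give up
        }
    }
}
```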

mvmltl_f77

public static void mvmltl_f77(int n,
                              double[][] a,
                              double[] x,
                              double[] y)

The mvmltl_f77 method computes y = Lx where L is a lower triangular matrix stored in A. Translated by Steve Verrill, April 27, 1998.

Parameters:
n - The dimension of the problem
a - Lower triangular matrix
x - Operand vector
y - Result vector

mvmlts_f77

public static void mvmlts_f77(int n,
                              double[][] a,
                              double[] x,
                              double[] y)

The mvmlts_f77 method computes y = Ax where A is a symmetric matrix stored in its lower triangular part. Translated by Steve Verrill, April 27, 1998.

Parameters:
n - The dimension of the problem
a - The symmetric matrix
x - Operand vector
y - Result vector
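
Because only the lower triangle of the symmetric matrix is stored, each off-diagonal entry must serve for both its own position and its mirror image. A sketch of the idea (an illustration with hypothetical names, not the translation itself):

```java
class MvmltsSketch {
    // y = A x where only the lower triangle of symmetric A is stored.
    // The entry a[j][i] with j > i stands in for the unstored a[i][j].
    // FORTRAN-style indexing (elements 1..n used).
    static void mvmlts(int n, double[][] a, double[] x, double[] y) {
        for (int i = 1; i <= n; i++) {
            double sum = 0.0;
            for (int j = 1; j <= i; j++) sum += a[i][j] * x[j];     // stored row
            for (int j = i + 1; j <= n; j++) sum += a[j][i] * x[j]; // mirrored part
            y[i] = sum;
        }
    }
}
```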

mvmltu_f77

public static void mvmltu_f77(int n,
                              double[][] a,
                              double[] x,
                              double[] y)

The mvmltu_f77 method computes y = (L transpose)x where L is a lower triangular matrix stored in a (the transpose of L is taken implicitly). Translated by Steve Verrill, April 27, 1998.

Parameters:
n - The dimension of the problem
a - The lower triangular matrix
x - Operand vector
y - Result vector

optchk_f77

public static void optchk_f77(int n,
                              double[] x,
                              double[] typsiz,
                              double[] sx,
                              double[] fscale,
                              double[] gradtl,
                              int[] itnlim,
                              int[] ndigit,
                              double epsm,
                              double[] dlt,
                              int[] method,
                              int[] iexp,
                              int[] iagflg,
                              int[] iahflg,
                              double[] stepmx,
                              int[] msg)

The optchk_f77 method checks the input for reasonableness. Translated by Steve Verrill, May 12, 1998.

Parameters:
n - The dimension of the problem
x - On entry, estimate of the root of f_to_minimize
typsiz - Typical size of each component of x
sx - Scaling vector for x
fscale - Estimate of scale of objective function
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
itnlim - Maximum number of allowable iterations
ndigit - Number of good digits in the optimization function
epsm - Machine epsilon
dlt - Trust region radius
method - Algorithm indicator
iexp - Expense flag
iagflg - = 1 if an analytic gradient is supplied
iahflg - = 1 if an analytic Hessian is supplied
stepmx - Maximum step size
msg - Message and error code

optdrv_f77

public static void optdrv_f77(int n,
                              double[] x,
                              optimization.Uncmin_methods minclass,
                              double[] typsiz,
                              double[] fscale,
                              int[] method,
                              int[] iexp,
                              int[] msg,
                              int[] ndigit,
                              int[] itnlim,
                              int[] iagflg,
                              int[] iahflg,
                              double[] dlt,
                              double[] gradtl,
                              double[] stepmx,
                              double[] steptl,
                              double[] xpls,
                              double[] fpls,
                              double[] gpls,
                              int[] itrmcd,
                              double[][] a,
                              double[] udiag,
                              double[] g,
                              double[] p,
                              double[] sx,
                              double[] wrk0,
                              double[] wrk1,
                              double[] wrk2,
                              double[] wrk3)

The optdrv_f77 method is the driver for the nonlinear optimization problem. Translated by Steve Verrill, May 18, 1998.

Parameters:
n - The dimension of the problem
x - On entry, estimate of the location of a minimum of f_to_minimize
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
typsiz - Typical size of each component of x
fscale - Estimate of scale of objective function
method - Algorithm indicator: 1 = line search; 2 = double dogleg; 3 = More-Hebdon
iexp - Expense flag: 1 = the optimization function, f_to_minimize, is expensive to evaluate; 0 = otherwise. If iexp = 1, the Hessian is evaluated by secant update rather than analytically or by finite differences.
msg - On input: a message (> 0) to inhibit certain automatic checks. On output: an error code (< 0; = 0 means no error).
ndigit - Number of good digits in the optimization function
itnlim - Maximum number of allowable iterations
iagflg - = 1 if an analytic gradient is supplied
iahflg - = 1 if an analytic Hessian is supplied
dlt - Trust region radius
gradtl - Tolerance at which the gradient is considered close enough to zero to terminate the algorithm
stepmx - Maximum step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
xpls - On exit: xpls is a local minimum
fpls - On exit: function value at xpls
gpls - On exit: gradient at xpls
itrmcd - Termination code
a - workspace for Hessian (or its approximation) and its Cholesky decomposition
udiag - workspace (for diagonal of Hessian)
g - workspace (for gradient at current iterate)
p - workspace for step
sx - workspace (for scaling vector)
wrk0 - workspace
wrk1 - workspace
wrk2 - workspace
wrk3 - workspace

optstp_f77

public static void optstp_f77(int n,
                              double[] xpls,
                              double[] fpls,
                              double[] gpls,
                              double[] x,
                              int[] itncnt,
                              int[] icscmx,
                              int[] itrmcd,
                              double[] gradtl,
                              double[] steptl,
                              double[] sx,
                              double[] fscale,
                              int[] itnlim,
                              int[] iretcd,
                              boolean[] mxtake,
                              int[] msg)

The optstp_f77 method determines whether the algorithm should terminate due to any of the following:
1.) the problem has been solved to within the user tolerance
2.) convergence to within the user tolerance
3.) the iteration limit has been reached
4.) divergence, or a too restrictive maximum step size (stepmx), is suspected
Translated by Steve Verrill, May 12, 1998.

Parameters:
n - The dimension of the problem
xpls - New iterate
fpls - Function value at new iterate
gpls - Gradient or approximation at new iterate
x - Old iterate
itncnt - Current iteration
icscmx - Number of consecutive steps >= stepmx (retain between successive calls)
itrmcd - Termination code
gradtl - Tolerance at which the relative gradient is considered close enough to zero to terminate the algorithm
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
sx - Scaling vector for x
fscale - Estimate of the scale of the objective function
itnlim - Maximum number of allowable iterations
iretcd - Return code
mxtake - Boolean flag indicating step of maximum length was used
msg - If msg includes a term 8, suppress output
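
The "gradient close enough to zero" part of such a stopping rule is usually a relative test: each gradient component is scaled by the size of x and of f before comparison against gradtl, so the decision is invariant to rescaling the variables or the objective. A sketch under that standard convention (hypothetical names; optstp_f77's exact formula is not shown here):

```java
class GradStopSketch {
    // True when the scaled gradient is at or below gradtl.
    // FORTRAN-style indexing (elements 1..n used).
    static boolean gradientSmall(int n, double[] g, double[] x,
                                 double[] typsiz, double fpls,
                                 double fscale, double gradtl) {
        double d = Math.max(Math.abs(fpls), fscale);
        double rgx = 0.0;                       // largest relative component
        for (int i = 1; i <= n; i++) {
            double relgrd = Math.abs(g[i])
                            * Math.max(Math.abs(x[i]), typsiz[i]) / d;
            rgx = Math.max(rgx, relgrd);
        }
        return rgx <= gradtl;
    }
}
```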

qraux1_f77

public static void qraux1_f77(int n,
                              double[][] r,
                              int i)

The qraux1_f77 method interchanges rows i,i+1 of the upper Hessenberg matrix r, columns i to n. Translated by Steve Verrill, April 29, 1998.

Parameters:
n - The dimension of the matrix
r - Upper Hessenberg matrix
i - Index of row to interchange (i < n)

qraux2_f77

public static void qraux2_f77(int n,
                              double[][] r,
                              int i,
                              double a,
                              double b)

The qraux2_f77 method pre-multiplies r by the Jacobi rotation j(i,i+1,a,b). Translated by Steve Verrill, April 29, 1998.

Parameters:
n - The dimension of the matrix
r - Upper Hessenberg matrix
i - Index of row
a - scalar
b - scalar

qrupdt_f77

public static void qrupdt_f77(int n,
                              double[][] a,
                              double[] u,
                              double[] v)

The qrupdt_f77 method finds an orthogonal n by n matrix, Q*, and an upper triangular n by n matrix, R*, such that (Q*)(R*) = R + u(v transpose). (In the original FORTRAN comments this is written R+U(V+), where + denotes the transpose.) Translated by Steve Verrill, May 11, 1998.

Parameters:
n - The dimension of the problem
a - On input: contains R On output: contains R*
u - Vector
v - Vector

result_f77

public static void result_f77(int n,
                              double[] x,
                              double[] f,
                              double[] g,
                              double[][] a,
                              double[] p,
                              int[] itncnt,
                              int iflg)

The result_f77 method prints information. Translated by Steve Verrill, May 11, 1998.

Parameters:
n - The dimension of the problem
x - Estimate of the location of a minimum at iteration k
f - function value at x
g - gradient at x
a - Hessian at x
p - Step taken
itncnt - Iteration number (k)
iflg - Flag controlling the information to print

sclmul_f77

public static void sclmul_f77(int n,
                              double s,
                              double[] v,
                              double[] z)

The sclmul_f77 method multiplies a vector by a scalar. Translated by Steve Verrill, May 8, 1998.

Parameters:
n - The dimension of the problem
s - The scalar
v - Operand vector
z - Result vector

secfac_f77

public static void secfac_f77(int n,
                              double[] x,
                              double[] g,
                              double[][] a,
                              double[] xpls,
                              double[] gpls,
                              double epsm,
                              int[] itncnt,
                              double rnf,
                              int[] iagflg,
                              boolean[] noupdt,
                              double[] s,
                              double[] y,
                              double[] u,
                              double[] w)

The secfac_f77 method updates the Hessian by the BFGS factored technique. Translated by Steve Verrill, May 14, 1998.

Parameters:
n - The dimension of the problem
x - Old iterate
g - Gradient or approximation at the old iterate
a - On entry: Cholesky decomposition of Hessian in lower triangle and diagonal On exit: Updated Cholesky decomposition of Hessian in lower triangle and diagonal
xpls - New iterate
gpls - Gradient or approximation at the new iterate
epsm - Machine epsilon
itncnt - Iteration count
rnf - Relative noise in optimization function f_to_minimize
iagflg - 1 if an analytic gradient is supplied, 0 otherwise
noupdt - Boolean: no update yet (retain value between successive calls)
s - Workspace
y - Workspace
u - Workspace
w - Workspace

secunf_f77

public static void secunf_f77(int n,
                              double[] x,
                              double[] g,
                              double[][] a,
                              double[] udiag,
                              double[] xpls,
                              double[] gpls,
                              double epsm,
                              int[] itncnt,
                              double rnf,
                              int[] iagflg,
                              boolean[] noupdt,
                              double[] s,
                              double[] y,
                              double[] t)

The secunf_f77 method updates the Hessian by the BFGS unfactored approach. Translated by Steve Verrill, May 8, 1998.

Parameters:
n - The dimension of the problem
x - The old iterate
g - The gradient or an approximation at the old iterate
a - On entry: Approximate Hessian at the old iterate in the upper triangular part (and udiag) On exit: Updated approximate Hessian at the new iterate in the lower triangular part and diagonal
udiag - On entry: Diagonal of Hessian
xpls - New iterate
gpls - Gradient or approximation at the new iterate
epsm - Machine epsilon
itncnt - Iteration count
rnf - Relative noise in the optimization function, f_to_minimize
iagflg - = 1 if an analytic gradient is supplied, = 0 otherwise
noupdt - Boolean: no update yet (retain value between calls)
s - workspace
y - workspace
t - workspace

sndofd_f77

public static void sndofd_f77(int n,
                              double[] xpls,
                              optimization.Uncmin_methods minclass,
                              double[] fpls,
                              double[][] a,
                              double[] sx,
                              double rnoise,
                              double[] stepsz,
                              double[] anbr)

The sndofd_f77 method finds second order forward finite difference approximations to the Hessian. In optimization, use this method to estimate the Hessian of the objective function if analytic user functions have been supplied for neither the gradient nor the Hessian, and the objective function is inexpensive to evaluate. Translated by Steve Verrill, May 8, 1998.

Parameters:
n - The dimension of the problem
xpls - New iterate
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
fpls - Function value at the new iterate
a - "FINITE DIFFERENCE APPROXIMATION TO HESSIAN. ONLY LOWER TRIANGULAR MATRIX AND DIAGONAL ARE RETURNED"
sx - Scaling vector for x
rnoise - Relative noise in the function to be minimized
stepsz - Workspace (stepsize in i-th component direction)
anbr - Workspace (neighbor in i-th direction)
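
The second-order forward-difference formula such a routine applies is

   h[i][j] ~ ( f(x + h_i e_i + h_j e_j) - f(x + h_i e_i) - f(x + h_j e_j) + f(x) ) / (h_i h_j)

which a sketch can implement as follows. This is an illustration with hypothetical names; the step-size rule (cube root of the relative noise) is the usual choice, but treat the exact constants as an assumption about sndofd_f77.

```java
import java.util.function.ToDoubleFunction;

class FdHessianSketch {
    // Second-order forward finite-difference Hessian; only the lower
    // triangle and diagonal of a are filled. fx is f evaluated at x.
    // FORTRAN-style indexing (elements 1..n used).
    static void sndofd(int n, double[] x, ToDoubleFunction<double[]> f,
                       double fx, double[][] a, double[] sx, double rnoise) {
        double cube = Math.pow(rnoise, 1.0 / 3.0);
        double[] step = new double[n + 1];   // stepsize per direction
        double[] fnbr = new double[n + 1];   // f at the i-th neighbor
        for (int i = 1; i <= n; i++) {
            step[i] = cube * Math.max(Math.abs(x[i]), 1.0 / sx[i]);
            double xi = x[i];
            x[i] = xi + step[i];
            step[i] = x[i] - xi;             // step actually taken
            fnbr[i] = f.applyAsDouble(x);
            x[i] = xi;
        }
        for (int i = 1; i <= n; i++) {
            double xi = x[i];
            x[i] = xi + 2.0 * step[i];
            // diagonal: (f(x) - 2 f(x+h e_i) + f(x+2h e_i)) / h^2
            a[i][i] = (fx - fnbr[i] + (f.applyAsDouble(x) - fnbr[i]))
                      / (step[i] * step[i]);
            x[i] = xi + step[i];
            for (int j = 1; j < i; j++) {
                double xj = x[j];
                x[j] = xj + step[j];
                a[i][j] = (fx - fnbr[i] + (f.applyAsDouble(x) - fnbr[j]))
                          / (step[i] * step[j]);
                x[j] = xj;
            }
            x[i] = xi;                       // restore the iterate
        }
    }
}
```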

tregup_f77

public static void tregup_f77(int n,
                              double[] x,
                              double[] f,
                              double[] g,
                              double[][] a,
                              optimization.Uncmin_methods minclass,
                              double[] sc,
                              double[] sx,
                              boolean[] nwtake,
                              double[] stepmx,
                              double[] steptl,
                              double[] dlt,
                              int[] iretcd,
                              double[] xplsp,
                              double[] fplsp,
                              double[] xpls,
                              double[] fpls,
                              boolean[] mxtake,
                              int method,
                              double[] udiag)

The tregup_f77 method decides whether to accept xpls = x + sc as the next iterate and update the trust region dlt. Translated by Steve Verrill, May 11, 1998.

Parameters:
n - The dimension of the problem
x - Old iterate
f - Function value at old iterate
g - Gradient or approximation at old iterate
a - Cholesky decomposition of Hessian in lower triangular part and diagonal. Hessian or approximation in upper triangular part.
minclass - A class that implements the Uncmin_methods interface (see the definition in Uncmin_methods.java and the example in UncminTest_f77.java). The required f_to_minimize, gradient, and hessian methods are described in full where this parameter first appears above.
sc - Current step
sx - Scaling vector for x
nwtake - Boolean, = true if Newton step taken
stepmx - Maximum allowable step size
steptl - Relative step size at which successive iterates are considered close enough to terminate the algorithm
dlt - Trust region radius
iretcd - Return code: = 0, xpls accepted as the next iterate, dlt is the trust region radius for the next iteration; = 1, xpls unsatisfactory but accepted as the next iterate because xpls - x is smaller than the smallest allowable step length; = 2, f(xpls) too large, continue the current iteration with a new, reduced dlt; = 3, f(xpls) sufficiently small, but the quadratic model predicts f(xpls) well enough that the current iteration continues with a new, doubled dlt
xplsp - Workspace (value needs to be retained between successive calls of k-th global step)
fplsp - Retain between successive calls
xpls - New iterate
fpls - Function value at new iterate
mxtake - Boolean flag indicating step of maximum length used
method - Algorithm to use to solve minimization problem = 1 Line search = 2 Double dogleg = 3 More-Hebdon
udiag - Diagonal of Hessian in a
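
The accept/shrink/grow decision at the heart of a trust-region update can be sketched as below. This is the generic textbook logic with hypothetical names and thresholds, not tregup_f77's exact rules (the real routine also guards against steps shorter than steptl and retains state across calls): compare the actual reduction f(x) - f(xpls) with the reduction predicted by the quadratic model, then adjust dlt.

```java
class TrustRegionSketch {
    // pred is the model-predicted reduction (assumed > 0), stepLen the
    // length of the step just tried. Returns the new radius; accept[0]
    // reports whether xpls is taken as the next iterate.
    static double update(double f, double fpls, double pred,
                         double dlt, double stepLen, boolean[] accept) {
        double ratio = (f - fpls) / pred;
        if (ratio < 1.0e-4) {        // essentially no decrease:
            accept[0] = false;       // reject the step, shrink the region
            return dlt / 2.0;
        }
        accept[0] = true;
        if (ratio > 0.75 && stepLen >= 0.99 * dlt) {
            return 2.0 * dlt;        // model very accurate: grow the region
        } else if (ratio < 0.25) {
            return dlt / 2.0;        // model poor: shrink the region
        }
        return dlt;                  // model adequate: keep the radius
    }
}
```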
