/usr/share/octave/site/m/nlopt/nlopt_optimize.m is in octave-nlopt 2.4.2+dfsg-2.

% Usage: [xopt, fopt, retcode] = nlopt_optimize(opt, xinit)
%
% Optimizes (minimizes or maximizes) a nonlinear function under
% nonlinear constraints from the starting guess xinit, where the
% objective, constraints, stopping criteria, and other options are 
% specified in the structure opt described below.  A variety of local
% and global optimization algorithms can be used, as specified by the 
% opt.algorithm parameter described below.  Returns the optimum
% function value fopt, the location xopt of the optimum, and a
% return code retcode described below (> 0 on success).
%
% The dimension (n) of the problem, i.e. the number of design variables,
% is specified implicitly via the length of xinit.
%
% This function is a front-end for the external routine nlopt_optimize
% in the free NLopt nonlinear-optimization library, which is a wrapper
% around a number of free/open-source optimization subroutines.  More
% details can be found on the NLopt web page (ab-initio.mit.edu/nlopt)
% and also under 'man nlopt_minimize' on Unix.
%
% OBJECTIVE FUNCTION:
%
% The objective function f is specified via opt.min_objective or
% opt.max_objective for minimization or maximization, respectively.
% opt.min/max_objective should be a handle (@) to a function of the form:
%
%    [val, gradient] = f(x)
%
% where x is a row vector, val is the function value f(x), and gradient
% is a row vector giving the gradient of the function with respect to x.
% The gradient is only used for gradient-based optimization algorithms;
% some of the algorithms (below) are derivative-free and only require
% f to return val (its value).
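%
% For example, a minimal sketch of a suitable objective (a hypothetical
% quadratic, here called myfunc) for a gradient-based algorithm:
%
%    function [val, gradient] = myfunc(x)
%      val = sum((x - 3).^2);    % f(x) = sum_i (x_i - 3)^2
%      gradient = 2*(x - 3);     % row vector, same length as x
%    end
%    opt.min_objective = @myfunc;
%
% For a derivative-free algorithm, an anonymous handle returning only
% the value, e.g. @(x) sum((x - 3).^2), suffices.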
%
% BOUND CONSTRAINTS:
%
% Lower and/or upper bounds for the design variables x are specified
% via opt.lower_bounds and/or opt.upper_bounds, respectively: these
% are vectors (of the same length as xinit, above) giving the bounds
% in each component. An unbounded component may be specified by a
% lower/upper bound of -inf/+inf, respectively.  If opt.lower_bounds
% and/or opt.upper_bounds are not specified, the default bounds are
% -inf/+inf (i.e. unbounded), respectively.
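%
% For example, in a 2-variable problem, to require x(1) >= 0 while
% leaving x(2) unbounded below, with both components at most 10:
%
%    opt.lower_bounds = [0, -inf];
%    opt.upper_bounds = [10, 10];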
%
% NONLINEAR CONSTRAINTS:
%
% Several of the algorithms in NLopt (MMA, COBYLA, and ORIG_DIRECT) also
% support arbitrary nonlinear inequality constraints, and some (ISRES
% and AUGLAG) also allow nonlinear equality constraints.  For these
% algorithms, you can specify as many nonlinear constraints as you wish.
% (The default is no nonlinear constraints.)
%
% Inequality constraints of the form fc{i}(x) <= 0 are specified via opt.fc,
% which is a cell array of function handles (@) of the same form as
% the objective function above (i.e., returning the value and optionally
% the gradient of the constraint function fc{i}, where the gradient is
% only needed for gradient-based algorithms).
%
% Equality constraints of the form h{i}(x) = 0 are specified via opt.h,
% which is a cell array of function handles (@) of the same form as
% the objective function above (i.e., returning the value and optionally
% the gradient of the constraint function h{i}, where the gradient is
% only needed for gradient-based algorithms).
%
% For both inequality and equality constraints, you can supply a
% "tolerance" for each constraint: this tolerance is used for convergence
% tests only, and a point x is considered feasible for purposes of
% convergence if the constraint is violated by the given tolerance.
% The tolerances are specified via opt.fc_tol and opt.h_tol, respectively,
% which must be vectors of the same length as opt.fc and opt.h, so
% that opt.fc_tol(i) is the tolerance for opt.fc{i} and opt.h_tol(i)
% is the tolerance for opt.h{i}.  These tolerances default to zero; a
% small nonzero tolerance is recommended, however, especially for h_tol.
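%
% For example, a sketch of a single hypothetical inequality constraint
% x(1) + x(2) - 1 <= 0, with its gradient and a small tolerance:
%
%    opt.fc = { @(x) deal(x(1) + x(2) - 1, [1, 1]) };
%    opt.fc_tol = 1e-8;
%
% (deal returns only the value when a single output is requested, so the
% same handle also works with derivative-free algorithms.)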
%
% ALGORITHMS
%
% The optimization algorithm must be specified via opt.algorithm.
%
% The algorithm should be one of the following constants (name and
% interpretation are the same as for the C language interface).  Names
% with _G*_ are global optimization, and names with _L*_ are local
% optimization.  Names with _*N_ are derivative-free, while names
% with _*D_ are gradient-based algorithms.  Algorithms:
%
% NLOPT_GD_MLSL_LDS, NLOPT_GD_MLSL, NLOPT_GD_STOGO, NLOPT_GD_STOGO_RAND, 
% NLOPT_GN_CRS2_LM, NLOPT_GN_DIRECT_L, NLOPT_GN_DIRECT_L_NOSCAL, 
% NLOPT_GN_DIRECT_L_RAND, NLOPT_GN_DIRECT_L_RAND_NOSCAL, NLOPT_GN_DIRECT, 
% NLOPT_GN_DIRECT_NOSCAL, NLOPT_GN_ISRES, NLOPT_GN_MLSL_LDS, NLOPT_GN_MLSL, 
% NLOPT_GN_ORIG_DIRECT_L, NLOPT_GN_ORIG_DIRECT, NLOPT_LD_AUGLAG_EQ, 
% NLOPT_LD_AUGLAG, NLOPT_LD_LBFGS, NLOPT_LD_LBFGS_NOCEDAL, NLOPT_LD_MMA, 
% NLOPT_LD_TNEWTON, NLOPT_LD_TNEWTON_PRECOND, 
% NLOPT_LD_TNEWTON_PRECOND_RESTART, NLOPT_LD_TNEWTON_RESTART, 
% NLOPT_LD_VAR1, NLOPT_LD_VAR2, NLOPT_LN_AUGLAG_EQ, NLOPT_LN_AUGLAG, 
% NLOPT_LN_BOBYQA, NLOPT_LN_COBYLA, NLOPT_LN_NELDERMEAD, 
% NLOPT_LN_NEWUOA_BOUND, NLOPT_LN_NEWUOA, NLOPT_LN_PRAXIS, NLOPT_LN_SBPLX
%
% For more information on individual algorithms, see their individual
% help pages (e.g. "help NLOPT_LN_SBPLX").
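%
% For example, assuming the NLOPT_* constants installed by this package
% are on the Octave path, a gradient-based local algorithm is selected by:
%
%    opt.algorithm = NLOPT_LD_MMA;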
%
% STOPPING CRITERIA:
%
% Multiple stopping criteria can be specified by setting one or more of
% the following fields of opt.  The optimization halts whenever any
% one of the given criteria is satisfied.
%
% opt.stopval: Stop when an objective value at least as good as stopval
%    is found: stop minimizing when a value <= stopval is found, or stop
%    maximizing when a value >= stopval is found.
%
% opt.ftol_rel: Relative tolerance on function value, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    the function value by less than opt.ftol_rel multiplied by
%    the absolute value of the function.
%
% opt.ftol_abs: Absolute tolerance on function value, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    the function value by less than opt.ftol_abs.
%
% opt.xtol_rel: Relative tolerance on the design variables x, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    every component of x by less than opt.xtol_rel multiplied by
%    the absolute value of that component of x.
%
% opt.xtol_abs: Absolute tolerance on the design variables x, to stop when
%    an optimization step (or an estimate of the optimum) changes
%    every component of x by less than the corresponding component of
%    opt.xtol_abs, which should be a vector of the same length as x.
%
% opt.maxeval: Maximum number of function evaluations.
%
% opt.maxtime: Maximum runtime (in seconds) for the optimization.
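%
% For example, to stop once the function value has converged to roughly
% six significant digits, or after at most 1000 evaluations, whichever
% comes first:
%
%    opt.ftol_rel = 1e-6;
%    opt.maxeval = 1000;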
%
% RETURN CODE:
%
% The retcode result is positive upon successful completion, and
% negative for an error.  The specific values are:
%
% generic success code: +1
%      stopval reached: +2
%         ftol reached: +3
%         xtol reached: +4
%      maxeval reached: +5
%      maxtime reached: +6
% generic failure code: -1
%    invalid arguments: -2
%        out of memory: -3
%     roundoff-limited: -4
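%
% For example, a simple check of the result:
%
%    [xopt, fopt, retcode] = nlopt_optimize(opt, xinit);
%    if (retcode < 0)
%      error('nlopt_optimize failed with code %d', retcode);
%    end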
%
% LOCAL OPTIMIZER:
%
% Some of the algorithms, especially MLSL and AUGLAG, use a different
% optimization algorithm as a subroutine, typically for local optimization.
% By default, they use MMA or COBYLA for gradient-based or derivative-free
% searching, respectively.  However, you can change this by specifying
% opt.local_optimizer: this is a structure with the same types of fields as opt
% (stopping criteria, algorithm, etcetera).  The objective function
% and nonlinear constraint parameters of opt.local_optimizer are ignored.
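%
% For example, a sketch of an augmented-Lagrangian setup with a
% derivative-free subsidiary optimizer and its own stopping tolerance:
%
%    opt.algorithm = NLOPT_LN_AUGLAG;
%    local.algorithm = NLOPT_LN_NELDERMEAD;
%    local.xtol_rel = 1e-4;
%    opt.local_optimizer = local;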
%
% INITIAL STEP SIZE:
%
% For derivative-free local-optimization algorithms, the optimizer must
% somehow decide on some initial step size to perturb x by when it begins
% the optimization. This step size should be big enough that the value
% of the objective changes significantly, but not too big if you want to
% find the local optimum nearest to x. By default, NLopt chooses this
% initial step size heuristically from the bounds, tolerances, and other
% information, but this may not always be the best choice.
%
% You can modify the initial step by setting opt.initial_step, which
% is a vector of the same length as x containing the (nonzero) initial
% step size for each component of x.
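%
% For example, if the two components of x have very different natural
% scales, one might set something like:
%
%    opt.initial_step = [0.1, 100];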
%
% STOCHASTIC POPULATION:
%
% Several of the stochastic search algorithms (e.g., CRS, MLSL, and
% ISRES) start by generating some initial "population" of random points
% x. By default, this initial population size is chosen heuristically in
% some algorithm-specific way, but the initial population can be changed
% by setting opt.population to the desired initial population size.
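%
% For example, to run ISRES with an initial population of 200 points:
%
%    opt.algorithm = NLOPT_GN_ISRES;
%    opt.population = 200;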
%
% VERBOSE OUTPUT:
%
% If opt.verbose is set to a nonzero value, then nlopt_optimize
% will print out verbose output; for example, it will print the
% value of the objective function after each evaluation.
%
% MORE INFORMATION:
%
% For more documentation, such as a detailed description of all the
% algorithms, see the NLopt home page: http://ab-initio.mit.edu/nlopt
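%
% COMPLETE EXAMPLE:
%
% A minimal end-to-end sketch (the two-variable quadratic is a stand-in
% for a real objective; the NLOPT_* constants are assumed to be on the
% Octave path):
%
%    opt.algorithm = NLOPT_LN_NELDERMEAD;  % derivative-free local search
%    opt.min_objective = @(x) sum((x - [1, 2]).^2);
%    opt.lower_bounds = [-10, -10];
%    opt.upper_bounds = [10, 10];
%    opt.xtol_rel = 1e-6;
%    [xopt, fopt, retcode] = nlopt_optimize(opt, [0, 0]);
%    % expect xopt near [1, 2], fopt near 0, and retcode > 0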